For Spambots, Flattery Gets You Everywhere.

Fake accounts can lead to real emotions.

Spam is the Internet's eternal squatter, the unwanted roommate who sees no reason to pay rent, yet borrows the Netflix password and regularly clogs the plumbing. But for whatever reason, we put up with it.
So, when spam gets kicked out, we celebrate—at least, we usually do. In December, when Instagram purged countless spam accounts, users whose popularity depended mainly on their army of followers lamented their loss. They missed the bots. The bots boosted their influence, even if that influence was artificial.
Of course, the bots weren't the sophisticated followers these users needed; they had simply become part of a new playing field. Social media sites like Instagram had allowed bots to inflate follower counts without incentivizing users to delete or report them. Ignore the bots, and the bots give you more likes and followers; what's wrong with that?

Plenty, said James Caverlee, a computer science professor at Texas A&M who studies the nature of spam on social media. In an email, Caverlee explained the human-bot relationship:
For spammers, there are a multitude of goals—some are aggressively promoting a product, some are trying to spread malware and phishing links, some are promoting some propaganda, some are just adding noise to the system, while others may be trying to build social capital (e.g., by accumulating followers or insinuating their way into your network) for some down-the-road reason.
For individuals, what is the reaction to those actions? Well, if a Twitter user's feed is suddenly filled with low-quality tweets from spam/robot accounts, then I would imagine a strong disincentive to use Twitter any more. But if the bots are engaged in increasing follower counts or favoriting tweets, then I can definitely see users treating that engagement as much more innocuous, if not somewhat favorable.
To him, the new relationship between human users and robot spam isn't just a product of the changing landscape. It's a product of spambots mirroring the way users act online, adopting a new personality so they can avoid detection and deletion. I'm calling this personality "the confidence bot."
The confidence bot is spam that acts like a con artist. This spam doesn't bombard inboxes with dummy text and links to potentially harmful websites. This spam doesn't act like spam at all: It flatters, or tries to make a user feel flattered, by benignly interacting with the user as a means to eventually achieve whatever ends it was meant to achieve (gaining back followers, promoting links, obtaining user information).
Manipulative spam isn't new: In 2008, Caverlee first studied spam targeting MySpace and found that the most successful spam profiles were deceptive ones that used pornographic images to lure and influence users. In 2010, Caverlee conducted a follow-up study of Twitter, where he found that "social spammers" fell into certain "personalities." These bots followed simple but distinct behavioral patterns: For example, "promoters" targeted businesses with sophisticated tweets and links, while "friend infiltrators" created seemingly legitimate profiles and asked for follows back. And bots on the whole are getting sneakier, with "impersonator bots" (ones that steal data, attack network access, etc.) growing to account for 29 percent of website visits in 2014, security firm Incapsula found in December.
What is new is the spambots' access to information about their targets, Caverlee said. Bots can observe how often users tweet and what they tweet, then quietly insert themselves into the user's interactions, until the user forgets about them and learns to ignore future bots. In turn, bots must make themselves seem organic, favoriting a tweet or post here, following similar users there.
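To make that pattern a bit more concrete, here is a minimal, purely illustrative sketch of the kind of pacing logic such a bot might use. Nothing in it comes from Caverlee's studies or from any real platform's API; the `SocialClient` class, the thresholds, and the engagement probabilities are all hypothetical, invented only to show how occasional, randomized likes and follows can be timed against a target's own posting rate so the activity looks organic.

```python
import random
import time
from dataclasses import dataclass
from typing import List

# Hypothetical stand-in for a platform API; not a real library.
@dataclass
class Post:
    post_id: str
    author: str
    timestamp: float

class SocialClient:
    """Toy client: a real bot would wrap an actual platform API here."""
    def recent_posts(self, user: str, limit: int = 20) -> List[Post]:
        now = time.time()
        return [Post(f"{user}-{i}", user, now - i * 3600) for i in range(limit)]

    def like(self, post: Post) -> None:
        print(f"liked {post.post_id}")

    def follow(self, user: str) -> None:
        print(f"followed {user}")

def posting_rate(posts: List[Post]) -> float:
    """Posts per hour over the observed window (the 'watch how often they tweet' step)."""
    if len(posts) < 2:
        return 0.0
    span_hours = (posts[0].timestamp - posts[-1].timestamp) / 3600
    return len(posts) / span_hours if span_hours else 0.0

def engage_quietly(client: SocialClient, target: str,
                   like_prob: float = 0.1, follow_prob: float = 0.02) -> None:
    """Mimic 'organic' behavior: rare, randomized engagement scaled to the target's activity."""
    posts = client.recent_posts(target)
    rate = posting_rate(posts)
    # The more active the target, the more easily a stray like blends in.
    effective_like_prob = min(like_prob * (1 + rate), 0.3)
    for post in posts:
        if random.random() < effective_like_prob:
            client.like(post)
    if random.random() < follow_prob:
        client.follow(target)

if __name__ == "__main__":
    engage_quietly(SocialClient(), "some_target_user")
```

The point of the sketch is the pacing, not the plumbing: engagement is infrequent, randomized, and tied to the target's own activity, which is exactly what makes this kind of spam hard to tell apart from a quiet human follower.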
In other words, spambots can dupe us because they learned from us. Those Instagram users who depended on bot follows, Rider University psychology professor John Suler told me, demonstrated how far we've come in treating all online interactions in "an artificial way." As he wrote in a post for the Cyberpsychology Research Center, thinking of interactions as artificial boils social media use down to a competition of numbers: "[Users] see their peers becoming symbiotically dependent on garnering feedback and praise on social media ('no likes = no worth'), while losing the ability to establish their own sense of self-worth."
Still, just because spambots have gotten better at fooling users doesn't mean the Internet is caving to them. It also doesn't mean it's wrong for users to take pride in follows, likes, or any other metrics often seen on social media. It just means that no matter how often sites purge themselves of fake accounts, those fake accounts will always return. And they'll return smarter than before—so much so that when sites do kick them out, they're not just dealing with spam, but sometimes also with users' egos.


The Atlantic
