Elon Musk’s accepted offer to buy Twitter has sparked much debate about what it means for the future of the social media platform, which plays a big role in shaping the news and information that many people – especially Americans – see.
Musk has said he wants to make Twitter an arena for free speech. It’s unclear what that will mean, and his statements have fueled speculation among supporters and detractors alike. As a company, Twitter can regulate speech on its platform as it sees fit. The US Congress and the European Union are considering bills regarding the regulation of social media, but these concern transparency, accountability, harmful illegal content and the protection of users’ rights, rather than the regulation of speech.
Musk’s calls for free speech on Twitter focus on two allegations: political bias and excessive moderation. As researchers of online disinformation and manipulation, my colleagues and I at the Indiana University Observatory on Social Media study the dynamics and impact of Twitter and its abuse. To make sense of Musk’s statements and the possible outcomes of his acquisition, let’s take a look at what the research shows.
Many conservative politicians and experts have alleged for years that major social media platforms, including Twitter, have a liberal political bias tantamount to censorship of conservative views. These claims are based on anecdotal evidence. For example, many supporters whose tweets have been labeled as misleading and downgraded, or whose accounts have been suspended for violating the platform’s terms of service, say Twitter targeted them because of their political views.
Unfortunately, Twitter and other platforms often apply their policies inconsistently, so it’s easy to find examples supporting one conspiracy theory or another. Yet a review by New York University’s Center for Business and Human Rights found no reliable evidence to support the allegation of anti-conservative bias by social media companies, even calling the allegation itself a form of disinformation.
A more direct assessment of political bias by Twitter is difficult due to the complex interactions between people and algorithms. People, of course, have political biases. For example, our experiments with political social bots found that Republican users are more likely to mistake conservative bots for humans, while Democratic users are more likely to mistake conservative human users for bots.
To take human bias out of the equation in our experiments, we deployed a group of benign social bots on Twitter. Each of these bots started by following one news source, with some bots following a liberal source and some a conservative one. After this initial friend selection, all bots were left alone to “drift” through the information ecosystem for a few months, during which they could gain followers. Each acted according to an identical algorithmic behavior: following or unfollowing random accounts, tweeting meaningless content, and retweeting or copying random posts from its feed.
But this behavior was politically neutral, without any understanding of the content seen or published. We tracked the bots to probe for political biases emerging from how Twitter works or how users interact.
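The content-blind policy of these drifter bots can be illustrated with a minimal sketch. This is not the actual code used in the study; the action names and the uniform-random choice are illustrative assumptions standing in for the identical, politically neutral behavior each bot followed.

```python
import random

# Illustrative action repertoire, matching the behaviors described above.
# The real bots acted through Twitter's API; here the actions are abstract.
ACTIONS = ["follow_random", "unfollow_random", "tweet_noise", "retweet_random"]

def choose_action(rng):
    # Every bot runs the same policy: pick an action uniformly at random,
    # with no regard to the political content involved.
    return rng.choice(ACTIONS)

def simulate_drift(seed, steps=1000):
    # Simulate one bot's activity and tally how often each action occurs.
    rng = random.Random(seed)
    counts = {action: 0 for action in ACTIONS}
    for _ in range(steps):
        counts[choose_action(rng)] += 1
    return counts
```

Because the policy is identical and content-neutral for every bot, any systematic political drift observed in the accounts must come from the platform and its users, not from the bots themselves.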
Surprisingly, our research provided evidence that Twitter has a conservative, rather than a liberal, bias. On average, the accounts were drawn toward the conservative side. Liberal accounts were exposed to moderate content, which shifted their experience toward the political center, while the interactions of right-leaning accounts skewed toward conservative content. Accounts that followed conservative news sources also received more politically aligned followers, becoming embedded in denser echo chambers and gaining influence within these partisan communities.
These differences in experiences and actions can be attributed to user interactions and information conveyed by the social media platform. But we couldn’t directly examine possible bias in Twitter’s News Feed algorithm because the actual ranking of posts in the “home timeline” is not available to outside researchers.
Twitter researchers were, however, able to audit the effects of their ranking algorithm on political content, revealing that the political right enjoys higher amplification than the political left. Their experiment showed that in six of the seven countries studied, conservative politicians enjoy greater algorithmic amplification than liberals. They also found that algorithmic amplification favors right-wing news sources in the United States.
Our research and Twitter’s own research both suggest that Musk’s apparent concern about an anti-conservative bias on Twitter is unfounded.
Moderators or censors?
The other allegation Musk appears to be making is that excessive moderation stifles free speech on Twitter. The concept of a free market of ideas is rooted in John Milton’s age-old reasoning that truth prevails in a free and open exchange of ideas. This view is often cited as the basis of arguments against moderation: accurate, relevant, and timely information should emerge spontaneously from user interactions.
Unfortunately, several aspects of modern social media hinder the free market of ideas. Limited attention and confirmation bias increase vulnerability to misinformation. Engagement-based ranking can amplify noise and manipulation, and the structure of information networks can distort perceptions and be “gerrymandered” to favor one group.
As a result, social media users have in recent years been the victims of manipulation by astroturfing, trolling and misinformation. Abuse is facilitated by social bots and coordinated networks that create the appearance of human mobs.
We and other researchers have observed these inauthentic accounts amplifying misinformation, influencing elections, committing financial fraud, infiltrating vulnerable communities, and disrupting communication. Musk has tweeted that he wants to defeat spambots and authenticate humans, but these are not easy or necessarily effective solutions.
Inauthentic accounts are used for malicious purposes beyond spam and are difficult to detect, especially when exploited by people in conjunction with software algorithms. And removing anonymity can hurt vulnerable groups. In recent years, Twitter has adopted policies and systems to moderate abuse by aggressively suspending accounts and networks displaying inauthentic coordinated behavior. A weakening of these moderation policies could make abuse rampant again.
Despite Twitter’s recent progress, integrity remains a challenge on the platform. Our lab continues to discover new types of sophisticated manipulation, which we will present at the AAAI International Web and Social Media Conference in June. Malicious users exploit so-called “follow trains” – groups of people who follow each other on Twitter – to rapidly grow their followings and create large, dense, hyperpartisan echo chambers that amplify toxic content from unreliable and conspiratorial sources.
Another effective malicious technique is to post and then strategically remove content that violates the platform’s terms once it has served its purpose. Even Twitter’s generous limit of 2,400 tweets per day can be circumvented through deletions: we’ve identified numerous accounts that flood the network with tens of thousands of tweets per day.
We have also found coordinated networks that engage in repetitive likes and unlikes of content that is eventually removed, which can manipulate ranking algorithms. These techniques allow malicious users to inflate the popularity of content while evading detection.
Musk’s plans for Twitter are unlikely to do anything about these manipulative behaviors.
Content moderation and freedom of expression
Musk’s likely acquisition of Twitter raises fears that the social media platform could decrease moderation of its content. This body of research shows that stronger, not weaker, moderation of the information ecosystem is needed to combat harmful misinformation.
It also shows that weaker moderation policies would ironically harm free speech: the voices of real users would be drowned out by malicious users who manipulate Twitter through inauthentic accounts, bots and echo chambers.
This article by Filippo Menczer, professor of informatics and computer science at Indiana University, is republished from The Conversation under a Creative Commons license. Read the original article.