It’s no secret that social media giants have been grappling with how to handle disinformation for years. That makes the recent news that some of these companies are backing away from the problem all the more troubling. In fact, this is precisely the moment when they should be investing additional resources, taking advantage of new technology and of the public’s growing understanding of and interest in the problem.
A recent New York Times article highlights that companies like Facebook and Twitter are pulling back when it comes to combatting disinformation on their platforms. While these tech giants have long claimed to take the issue seriously, the reality is that their efforts have fallen short. And in recent months, the evidence shows they are now less committed to tackling the problem than ever before.
We find this particularly concerning given the continued rise of disinformation, particularly from foreign malign actors seeking to create havoc in American civil discourse. Not only does disinformation seek to undermine public trust in the democratic process, but it also has real-world consequences and, in fact, contributed to the violent insurrection at the US Capitol on January 6, 2021.
It’s not just political disinformation that’s a problem. Hate speech is on the rise again as Twitter and other platforms roll back moderation policies. A recent study shows that since Elon Musk’s takeover of Twitter, there has been a rapid rise in neo-Nazi and anti-Semitic content on the platform, particularly among younger users. The jump from online rhetoric to real-world violence is a small one.
Given these challenges, it’s clear that social media companies need to be doing more, not less, to combat disinformation. But why is now the right moment to double down on these efforts?
The emergence of new technologies presents a fresh opportunity. As The New York Times reported, companies like Facebook are investing in AI and other tools that can detect and flag disinformation more effectively. While this technology is still in its early stages, it is maturing quickly and holds great promise if used correctly. Conversely, the same advancements also hold the potential for great abuse. Allowing these threats to flourish without making new efforts to address them is a recipe for disaster.
Americans are more aware of the problem now than ever before. Concern about disinformation has swelled in recent years, and voters now regularly cite it as a top issue. People are realizing that what they see on social media isn’t always true and that it often sows division. They are demanding that platforms do more to address the problem, and social media companies would do well to listen; not only is this moderation the right thing to do, but it is what their customers and policymakers are calling for.
At the end of the day, social media companies have a responsibility to ensure that their platforms are not being used to spread disinformation, hate speech, and extremism. This is a difficult and complex challenge, but it’s one that they must take seriously. If they don’t, the consequences could be dire.
The fact that social media giants are cutting back on disinformation moderation is deeply concerning. This is the moment when they should be redoubling their efforts and taking advantage of new technologies and Americans’ growing awareness of the problem. It’s not an easy task, but it’s a necessary one. These platforms help shape public opinion, and with that power comes responsibility.
As the problem of disinformation grows, it is imperative that companies like Facebook, Twitter, YouTube, and others take responsibility for their part in it. This means not only investing in new technology to identify and mitigate the problem, but also working closely with experts, policymakers, consumers, and other stakeholders to develop effective strategies for combating disinformation. These efforts are what their customers want, what policymakers are calling for, and what decency demands.