
Column: On social media, the ‘fog of war’ is a feature, not a bug

Burned-out vehicles sit on scorched ground.
The exact size and location of the crater outside the Ahli Arab Hospital in Gaza was among the factors that internet sleuths seized on to make the case for who was responsible — Israel or Hamas.
(Abed Khaled / Associated Press)

It’s a bad time to be online.

If you’ve logged on to any social media platform in the last two weeks, you’ll know what I’m talking about: Ever since Hamas unleashed its horrific attack on Israel, and Israel unleashed its horrific retaliatory bombing campaign on Gaza, there has been not only a deluge of heartbreaking and disturbing stories and images but also a flood of fake videos, out-of-context posts, phony experts, enraged screeds and falsified news — all raining down on our feeds in volumes of biblical proportions.

Disinformation researchers and journalists have called the mess an “algorithmically driven fog of war,” and news analysts have decried the flurry of bad info and the harder-than-ever task of sorting fact from fiction online. It culminated this week in a mad scramble to assign blame for a terrible attack on a Gaza hospital that left many civilians dead — Hamas-allied groups blaming Israel and vice versa — and a legion of online sleuths posting away in a mostly vain pursuit of the truth.

But let’s be clear about a few things — this digital fog of war has existed as long as social media’s been around to angrily scroll through in times of crisis. And even if that haze has occasionally been punctured for the greater good, as when it’s been used for citizen journalism and dissident organizing against oppressive regimes, social media’s incentive structure chiefly benefits the powerful and the unscrupulous; it rewards propagandists and opportunists, hucksters and clout-chasers.



As so many of us feel furious and powerless, we might take this occasion to consider the ways that social media delude us into believing we’re interacting with history rather than yelling at a screen — and how Big Tech bends that impulse to its benefit, too, encouraging us to spew out increasingly polemical posts before the facts are at all clear, and promising to reward the most inflammatory among us with notoriety, perhaps even payouts.

If we ever hope to fix any of that, and shape a social network that aspires to reliably disseminate factual information, we should pay close attention to what’s happening in the digital trenches of the platforms right now — and the particular ways that our current crop is failing.

Many have been quick to blame Elon Musk for the worst failings of the social media ecosystem. After all, Musk owns X, the platform formerly known as Twitter, once considered the premier online destination for finding up-to-the-second information on the major happenings of the world. Without doubt, X née Twitter has become a significantly less reliable source for news since Musk took over and gutted the content moderation teams in charge of keeping hoaxes, harassment and bad info at bay.

To make matters worse, Musk’s replacement of the previous “blue check” system — which, while imperfect, sought to verify the identities of officials and newsmakers — with a pay-to-play system that lets anyone purchase verification for $8 a month means that “verified” sources can push bad info — and even earn cash for it through X’s creator revenue sharing program. (A study by the news-rating service NewsGuard found that 74% of viral false or unsubstantiated claims about the Israel-Hamas war were spread by paid “verified” accounts.) Now it’s just a skeleton crew against millions of posts every day, new incentives for power users to post vitriolic rubbish, and a bare-bones “community notes” program where users can volunteer clarifications and context.

But the competition isn’t faring much better. Facebook and TikTok have been actively working to limit the amount of news that even shows up on their platforms in the first place. For one thing, they’re protesting new and proposed laws in Canada and the U.S. that compel tech giants to compensate the media companies that produce content that gets shared on their platforms. For another, news is harder and more costly to moderate than vacation pics and celebrity sponcon. On Instagram, Meta has been erroneously inserting the word “terrorist” into text translations of user account bios that contained the word “Palestinian.”


But this problem hardly began with Musk or TikTok. What we today call misinformation has attended every major crisis or catastrophe since the era of following them online began; it’s a symptom of large-scale social media, period. There are always hoax images — that shark swimming down the highway after any major city floods, a recycled photo from a previous tragedy, a horror transposed from another context — and breaking “news” that turns out to be false or half-true.


This is because social media are not in any way built to be news delivery services. As numerous scholars have shown, social platforms that are engineered to reach (and serve ads to) as many people as possible are built to incentivize inflammatory content: violent stuff, the polemics, the sensational fakes. This may be common knowledge by now, but the trend has only been exacerbated by the removal of buffers such as robust content moderation or trust and safety teams. It’s digital cable news at best, an unhinged 8chan comment board at worst.

Take the all-consuming online skirmish that unfolded this week over whether it was Israel that bombed a hospital in Gaza, killing at least 500 people, or a Palestinian rocket that misfired, killing far fewer. Right out of the gate, ideologies ran hot, and perhaps the biggest predictor of which explanation of the tragedy you endorsed online was your political orientation.

Amateur sleuths took to available satellite imagery and footage of the arcing rockets to spam out lengthy threads detailing why one side was or wasn’t responsible, based on factors like whether the impact crater from the blast matched the size seen in the available footage. It reminded me of the Reddit detectives who went into overdrive after the Boston Marathon bombing 10 years ago, piecing together evidence from digital errata in cellphone videos and news footage — and ultimately fingering an innocent bystander as the culprit.

That said, the example also highlights a fact of the new media environment: In the algorithmic fog of war, those with more power and resources have a distinct advantage. While Gaza officials blamed the Israel Defense Forces for the attack in a statement, the IDF responded with a far slicker social media package to rebut the claims — a graphic-laden series of posts claiming the explosion was the result of a misfiring rocket, complete with what it claimed was intercepted audio of Hamas fighters discussing the accident in peculiar levels of detail. Critics dismissed the media package as staged, noted that Israel has falsely denied responsibility before, and ’round and ’round we went.


As was true 10 years ago, the online hyperactivity — the fevered theorizing, the parsing of screenshots, the relentless opining — served very little purpose in the end, at least with regard to 99% of those involved. Little was accomplished that would not have been if the posters simply waited for journalists and investigators to carry out their work.

Social media can still be crucial. I was on a flight yesterday, scrolling X for hours until I nearly dissociated. So I turned on CNN, and what I saw there was somehow even worse — wall-to-wall coverage aligned almost exclusively with Israel’s viewpoint. There was a story about how Hamas was seeding disinformation online, one about the victims of Hamas’ brutal slaughter, one about a heroic Israeli who fought back against the militants, one about the Biden administration backing Israeli claims that a Palestinian rocket was to blame for the hospital explosion. None of these would be a terrible story on its own, but in the hours of coverage I watched, there was just one story about Gaza, and it was about an American doctor stranded there. To see evidence of the fallout from Israel’s bombing campaign, I had to turn to social media. Despite everything, it’s still the place where you can hear the voices not broadcast anywhere else. There is a reason, after all, that Israel is seeking to cut off Gaza’s internet access.


But we urgently need to figure out how to up the quotient of reliability and safety on the platforms, restrain our worst impulses in using them and increase our media literacy on them in general — none of which is likely to happen when the platforms in question are either run by a self-serving megalomaniac or are dependent on infinitely ratcheting up ad revenue, or both. In the age of COVID denial and QAnon and collapsing trust in our institutions everywhere, truth feels as malleable and elusive and even unknowable as ever; in the aftermath of an unspeakable attack that’s drawn comparisons to 9/11, and knowing how fallible our institutions were in the wake of that tragedy, it is indeed hard to know whom to trust, from mainstream to social media on up.

One thing is certain: We do need a place where we can grope toward shared understanding of world events. But the playpens owned by billionaires will never be that place.
