
Opinion: How Google ignores social media’s consequences for children

A law giving social media platforms limited immunity from lawsuits is being considered by the Supreme Court this week.
(Jenny Kane / Associated Press)

In legal disputes as in life, sometimes what isn’t said reveals more than what is. Consider the briefs filed with the Supreme Court in defense of a law granting Google and other tech companies limited immunity from lawsuits.

Gonzalez vs. Google, slated for oral argument before the Supreme Court on Tuesday, concerns Section 230 of the Communications Decency Act, a law enacted in 1996 to regulate the then-new internet. Child advocacy groups that filed friend-of-the-court briefs in the case note that social media platforms are knowingly hurting children by delivering dangerous content in an addictive manner. Tellingly, none of the scores of briefs filed on the side of the tech companies address those harms.

One of Congress’ primary purposes in enacting Section 230 was to provide, as some senators put it, “much-needed protection for children,” not just from explicit content but also from abuse. Ironically, the platforms are now arguing that Congress actually intended to offer them immunity for business decisions that they know will hurt children.


The Gonzalez case was brought by the family of an American murdered by ISIS in the 2015 Paris terrorist attacks. The family alleges that, as a foreseeable consequence of YouTube’s efforts to keep as many eyes on the platform as possible, terrorist recruitment videos are delivered to the people most likely to be interested in terrorism. In a similar case to be argued Wednesday, Twitter vs. Taamneh, the court will weigh whether the platforms’ alleged failure to take “meaningful steps” to remove terrorist content violates federal anti-terrorism law.

The repercussions of social media’s rise go well beyond increased access to terrorist content. During the years Instagram exploded from a million to a billion users, the United States saw an astonishing 146% spike in firearm suicides among children ages 10 to 14. Suicides overall among young people rose an unprecedented 57%. Although the correlation between the platforms’ growth and the youth mental health crisis does not prove causation, Facebook’s leaked internal research noted that 6% of American teen Instagram users “trace their desire to kill themselves” to the platform.

Researchers and clinicians have likewise repeatedly documented widespread social media-related mental health and physical harms to children. Last Monday, the U.S. Centers for Disease Control and Prevention reported that teen girls are suffering from record levels of sadness and suicide risk, which some experts attribute partly to the rise of social media. And on Tuesday, a U.S. Senate committee heard gut-wrenching stories about the dangers of, as one grieving parent described it, the “unchecked power of the social media industry.”

Social media platforms make money by selling advertising. More time spent on a platform means more eyes on its ads, which means it can charge more for those ads. Plus, the more time a user spends on the platform, the more data the platform develops on the user, which it can in turn use to keep the user on the platform longer.

Humans aren’t personally sorting who sees what on these platforms. Rather, humans give artificial intelligence technologies the instruction to maximize what platforms call “user engagement.” AI does this at fantastic speeds by testing which recommendations work best across billions of users. Then it delivers content based not just on what a child says she wants but also on what is statistically most likely to keep children like her glued to the screen. Too often, the answer is whatever exploits her fears and anxieties.

This means that with disturbing frequency, depressed teens are offered suicide tips, body image-anxious girls get content promoting eating disorders and drug-curious youths get opportunities to buy pills laced with lethal fentanyl. Moreover, platforms use neuroscientifically tailored gimmicks such as auto-scrolling, constant reminders to return to the platform and dopamine-firing “likes” that can be addictive to children. Often, children who earnestly want to turn off the platform can’t; their brains just aren’t old enough to resist addictions to the same degree as adults’.


To maintain growth every quarter, platforms have to find ways to attract more users and keep them longer. If platforms are allowed to keep profiting, without fear of financial consequences, from technology they know will harm great numbers of children, they will continue to perfect their techniques, and more children will be hurt. The child suicide and mental health crisis we are experiencing now will get worse with no end in sight.

It doesn’t have to be this way. Google’s own search engine, which is designed to prioritize content based on websites’ expertise, authoritativeness and trustworthiness, shows there are ways of deciding who sees what that are far less risky to children and to everyone else.

The court’s decision won’t end the debate over Section 230, but it could begin to restore the law to its original purpose of protecting young people. What should not be a matter of debate is that knowingly weaponizing children’s vulnerabilities against them ought to be illegal.

And if we can’t agree on that, then anyone who believes that the unprecedented harm children are enduring is the price society has to pay for freedom on the internet should at least acknowledge that harm.

Ed Howard is senior counsel at the Children’s Advocacy Institute of the University of San Diego School of Law.
