Article: Stephan Lewandowsky et al., “Challenges in Understanding Human-Algorithm Entanglement During Online Information Consumption,” Perspectives on Psychological Science (July 10, 2023). https://journals.sagepub.com/doi/full/10.1177/17456916231180809.

Reviewed by: Brent M. Eastwood

Almost any online activity involves interaction with proprietary algorithms. Whether someone is shopping on Amazon, searching with Google, or watching videos on YouTube, they are confronted with automated programs that recommend various actions. These algorithms have mystified even the most accomplished academic researchers who investigate human interaction online, and this line of research “is in its infancy.” Major internet companies keep these programs under close wraps, mainly to protect trade secrets, making it difficult to evaluate how they work and when and why they change over time.

Introduction

Stephan Lewandowsky, Ronald E. Robertson, and Renée DiResta explored this algorithmic information environment in an article published in Perspectives on Psychological Science. The authors argued that “increased transparency, more data sharing and greater protections for external researchers examining the algorithms” are needed to comprehend the growing field of human-algorithm activity.

Lewandowsky et al. describe a “coupling” or “entanglement” that continuously forms between humans and online algorithms. User responses generate an ever-growing stream of outputs and data as the algorithm learns to make additional recommendations in a “feedback loop.” This loop can keep people on a platform for longer periods, and many social media users may not even realize they are being subjected to, and affected by, algorithms.

The authors explain that the signals users send to algorithms may be explicit (e.g., a subscription to a YouTube channel) or implicit (e.g., the viewing behavior that shapes TikTok’s “For You” page). A Netflix subscriber may implicitly signal preferences through previous viewing and ratings, just as an Amazon shopper does through the rankings of earlier purchases. The algorithm then updates to optimize future recommendations and increase the time users spend on the application.
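To make the mechanics of this feedback loop concrete, the toy sketch below (a hypothetical Python illustration, not a model drawn from the article) shows how an explicit signal, such as a subscription, and an implicit signal, such as watch time, might both raise an item’s rank, so that each round of recommendations reinforces the last.

```python
# Hypothetical toy model of a recommendation "feedback loop" (illustration only).
from collections import defaultdict
import random

class ToyRecommender:
    def __init__(self, items):
        # Each user starts with a neutral score for every item.
        self.items = items
        self.scores = defaultdict(lambda: defaultdict(float))

    def record_explicit(self, user, item, weight=1.0):
        # Explicit signal: e.g., subscribing to a channel or rating a title.
        self.scores[user][item] += weight

    def record_implicit(self, user, item, watch_fraction):
        # Implicit signal: e.g., how much of a recommended video was watched.
        self.scores[user][item] += 0.5 * watch_fraction

    def recommend(self, user, k=3):
        # Rank items by the user's accumulated signals; break ties randomly.
        ranked = sorted(self.items,
                        key=lambda i: (self.scores[user][i], random.random()),
                        reverse=True)
        return ranked[:k]

# Feedback loop: recommendations shape behavior, and behavior reshapes recommendations.
rec = ToyRecommender(items=["news", "cooking", "gaming", "politics", "music"])
rec.record_explicit("alice", "cooking")           # Alice subscribes to a cooking channel.
for _ in range(5):
    shown = rec.recommend("alice")
    watched = shown[0]                            # Suppose she watches the top suggestion...
    rec.record_implicit("alice", watched, 0.9)    # ...which feeds back into future rankings.
print(rec.recommend("alice"))
```

Even in this simplified form, the loop illustrates the dynamic the authors describe: once an item gains a small advantage, it is recommended more often, generating more implicit signals that entrench its position.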

Discussion

Spending long periods on an application and interacting with its algorithm can sometimes lead users to problematic or hateful content. The authors focused on YouTube’s video recommendations to illustrate this point, noting that a review of the literature on YouTube found that “14 out of 23 studies implicated the recommender system in facilitating access to problematic content compared to only two that argued the system was not involved.”

This finding is important because future academic studies of how people become radicalized through YouTube’s problematic videos could improve our understanding of how such content spreads. People sometimes make “explicit” demands by liking or subscribing to videos from extremist influencers. Unfortunately, the authors do not call for further study of YouTube’s susceptibility to radicalizing users or of the platform’s potential for spreading misinformation and disinformation. Such an analysis of YouTube’s risk of extremism would strengthen their broader analysis of the coupling between humans and algorithms.

One recommendation that Lewandowsky et al. do make is for more transparency and public knowledge about online algorithms. Currently, analysts and users have little insight into the inner workings of algorithms. The authors assert that, without more data, scientists cannot understand the crucial societal and psychological dynamics associated with social media and algorithms. They note that much of what investigators have learned about algorithms comes from whistleblowers, which Lewandowsky et al. consider an insufficient basis for analyzing potential harms from online behavior.

To solve this problem, Lewandowsky et al. call for supporting congressional legislation such as the bipartisan Platform Accountability and Transparency Act (PATA). This legislation would require social media companies to grant access to user data for certain researchers who are approved by the National Science Foundation. The act attempts to alleviate the transparency problem and support the type of research that answers questions about the social science behind human-algorithmic interaction.

One significant issue with the bill is user privacy: Users could lose their personal information and their anonymity. That is why studies conducted with Facebook data have allowed users to choose whether to participate by voluntarily answering survey questions about their social media habits. Even with this safeguard, a data breach could still put user data in jeopardy. Participants in studies should also have protections against identity theft and safeguards for their personal financial data, and they should retain the power to control information about themselves.

Other critics of PATA raise First Amendment concerns. Scholars at the American Enterprise Institute argued in 2022 congressional testimony that mandated transparency is problematic, noting that “[b]usinesses in all but a few exceptional lines have constitutional rights against being inspected or searched without warning or reason.”

Moreover, experts from Stanford University voiced concerns about the implications of PATA for the liability protections offered by Section 230 of the Communications Decency Act. In a Lawfare article, these experts argued that Section 230 should remain intact under transparency legislation like PATA and that “Congress should expressly address—and, in our opinion, reject—any legislative changes that would reduce internet users’ protections.”

Recommendations for new legislation aside, Lewandowsky et al. make a clear case for the study of online algorithms and for the importance of transparency when examining their effects on users. Additional analysis is needed to better establish algorithms’ societal effects on some of the most popular websites in the world. Do online algorithms lead to addiction to major platforms by keeping users on applications for ever longer periods? Such questions could be answered by further investigations grounded in voluntary, rather than legally mandated, corporate participation. The authors are on the right track in calling for greater transparency and openness about algorithms that affect billions of users around the globe.

Conclusion

The algorithms used by social media and e-commerce platforms are often viewed by researchers as an impenetrable black box that stifles academic research. While additional research could help explain and reduce problematic behavior, a legislative proposal to mandate researcher access poses challenges, including concerns over data privacy, the First Amendment, and Section 230. Until these shortcomings are addressed, legislation granting researchers access to algorithmic data may prove difficult to implement in an efficacious manner.