How Privacy Laws Could Help Regulate Facebook’s Algorithms

Congressional testimony from a former Facebook employee has sparked outrage over the governance of the company’s algorithms and has renewed calls for regulation of the social media giant.

Although privacy laws ostensibly focus on data, GDPR in Europe and a set of emerging laws in the U.S. are well-positioned to play an important role in how regulators might govern Facebook’s algorithms. A regulatory action on Facebook could serve as a harbinger for future regulation over media, finance and other technology-driven companies that rely on algorithms and personalization to power their digital experiences.

The main thrust of the testimony given by Frances Haugen, the former Facebook product manager who shared extensive internal documents with Congress, focused on the algorithms used to determine the content users see in their newsfeeds. Haugen’s criticism was simple and familiar: these algorithms, optimized for engagement, have deleterious effects on our individual and collective health — and yet users and regulatory bodies have no insight into, or control over, the decisions embedded in their code.

Modern privacy laws focus primarily on giving users power over the personal or sensitive information organizations collect. However, these laws also provide individuals with rights over the processes and algorithms that put those data into action.

“Privacy laws are not simply about protecting what companies know about you,” said Rick Buck, Chief Privacy Officer at WireWheel. “They are also about protecting what organizations can do with that information.”

Privacy laws regulate algorithms by granting users rights over two types of activities: data profiling, the creation and segmentation of users based on personal data, and automated decision-making, which is often based on those profiles. These provisions are already active in Europe (GDPR), Brazil (LGPD), and China (PIPL), and will come into effect in California (CPRA), Colorado (CPA), and Virginia (CDPA) in 2023.

What Are Data Profiling and Automated Decision-Making — and Why Do They Matter?

Data profiling is broadly defined as the “automated processing of personal data to evaluate certain things about an individual.” Profiling is frequently used in digital advertising, where marketers create “profiles” of users (e.g., “high income,” “repeat buyer”) to influence the ads they see, but it also supports an array of non-advertising use cases (e.g., personalization, automated lending decisions).
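
To make profiling concrete, here is a minimal sketch in Python of how a system might derive segment labels like those above from raw personal data. Every field name and threshold is invented for illustration and implies nothing about any real platform’s logic:

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """Hypothetical personal data an ad platform might hold about one user."""
    user_id: str
    annual_income: int
    purchases_last_year: int
    segments: list[str] = field(default_factory=list)

def build_profile(user: UserRecord) -> UserRecord:
    # Automated profiling: evaluate personal data and attach segment labels.
    # Thresholds are made up for this example.
    if user.annual_income >= 150_000:
        user.segments.append("high income")
    if user.purchases_last_year >= 5:
        user.segments.append("repeat buyer")
    return user

profile = build_profile(UserRecord("u-123", annual_income=180_000, purchases_last_year=7))
print(profile.segments)  # ['high income', 'repeat buyer']
```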

Automated decision-making refers to “the process of making a decision [based on that profile] by automated means without any human involvement.” These decisions span a range of activities: an online decision to award a loan, an aptitude test used for recruitment, an ad targeted to a user and, in Facebook’s case, the determination of what we see in our newsfeeds. GDPR grants users the right not to be subject to decisions based solely on automated processing which produce “legal or similarly significant effects.”
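
Continuing that sketch (and reusing its hypothetical UserRecord and profile), an automated decision is one step further: the profile feeds a rule or model that acts on the person with no human in the loop. The lending rule below is, again, purely illustrative:

```python
def automated_loan_decision(profile: UserRecord) -> bool:
    # A decision made solely by automated means, with no human review.
    # This is the kind of "legal or similarly significant effect" the GDPR addresses.
    return "high income" in profile.segments and "repeat buyer" in profile.segments

approved = automated_loan_decision(profile)  # True; no human ever saw the application
```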

Our colleagues at the Future of Privacy Forum have put together a deep dive on automated decision-making considerations for policymakers, including a chart comparing existing requirements around automated decision-making between GDPR (Europe), CPRA (California), and CDPA (Virginia).

These provisions could have a massive impact on technology platforms like Facebook. If the effect of the algorithmic decision or profile is deemed “significant,” as Haugen argues it is for Facebook, individuals will have the right to demand an alternative experience in which the content in their newsfeed is not determined by the engagement-optimized algorithm (e.g., a reverse-chronological timeline).
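
In engineering terms, honoring such a right could look like a per-user flag that swaps out the ranking function entirely. The sketch below uses invented types and scoring; it illustrates the pattern, not Facebook’s actual system:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    engagement_score: float  # a model's prediction of likes, comments, shares

def rank_feed(posts: list[Post], user_opted_out: bool) -> list[Post]:
    if user_opted_out:
        # Alternative experience: a reverse-chronological timeline, newest first
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    # Default experience: engagement-optimized ordering
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```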

The impact here extends well beyond social media. A host of other industries, including traditional media, digital finance and e-commerce, rely on algorithms to sort content, provide recommendations or evaluate customers in ways that could be considered “significant.” A regulatory action against Facebook could lead regulators to pay closer attention to the way algorithms impact user experiences and rights in these areas as well.

“Algorithms are the next frontier of privacy,” says Buck. “Businesses need to consider not only how they might enable users to opt out of (or in to) ‘personalized’ experiences at scale, but also what a viable alternative experience might look like.”