When Code Causes Damage: Holding Platforms Accountable for Biased Algorithms


Nov 14, 2025

Algorithms serve as the backbone of social media, shaping what every user sees and hears on a platform. They are built to tailor content and enhance the user experience, curating feeds based on user demographics, what users have interacted with, and what they have spent time looking at. These systems have a powerful impact, yet they do not always function perfectly; in fact, they can sometimes perpetuate harm. Algorithms are built by engineers who program and prompt them. Most social media companies claim their algorithms are neutral systems made to please all users. However, recent cases, such as the antisemitic remarks generated by Grok on X, reveal the dangers and unpredictability of these systems. Social media companies should be held accountable for discriminatory impacts caused by their algorithms because they are responsible for forming, programming, and promoting those algorithms and therefore must be responsible for preventing harm.
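To make this concrete, consider a deliberately simplified sketch of engagement-based feed ranking. This is hypothetical illustration, not any platform's actual code, but it shows the key point: every weight in the formula is a choice a human engineer wrote down.

```python
# Hypothetical, simplified sketch of engagement-based feed ranking.
# The weights below are design choices made by engineers, not neutral
# facts: changing them changes what every user sees.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str

def score(post: Post, user_history: dict) -> float:
    """Score a post by how much this user has engaged with its topic."""
    interactions = user_history.get("interactions", {}).get(post.topic, 0)
    dwell_seconds = user_history.get("dwell", {}).get(post.topic, 0.0)
    # Each coefficient is a human decision embedded in code.
    return 1.0 * interactions + 0.1 * dwell_seconds

def rank_feed(posts: list, user_history: dict) -> list:
    """Order the feed so the highest-scoring posts appear first."""
    return sorted(posts, key=lambda p: score(p, user_history), reverse=True)
```

Even in this toy version, the system rewards whatever the user already lingers on, which is precisely how an algorithm can amplify harmful content without any single line of code being labeled "harmful."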

Just as with AI broadly, algorithms are not autonomous entities but the result of deliberate human design and prompting. Every line of code, every dataset, and every prompt represents a choice made by engineers and company representatives. Algorithms may operate with partial autonomy once deployed, but they are intentionally designed by humans at their foundation. In the case of Grok on X, engineers programmed the system to base its responses on existing posts created by users. As the Guardian reported, its instructions included: "You tell it like it is and you are not afraid to offend people who are politically correct" and "Understand the tone, context and language of the post. Reflect that in your response" (Guardian). This programming allowed the algorithm to pull from user posts that may have been inappropriate. Because Grok is owned by and associated with X, the company assumed responsibility for its outputs. This example shows a company's ability to control an algorithm and tailor it toward an ideal user experience. X took responsibility for the incident and removed the flawed instructions that caused the inappropriate remarks. These discriminatory remarks occurred as a direct result of the company's programming: the AI was prompted to reflect users' posts, which are relatively unregulated. As this instance shows, companies cannot plead ignorance of discriminatory and inappropriate language when their own systems produce it. Companies can foresee the harmful implications of their algorithms, and they should be held responsible for their programming.
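A short sketch illustrates why the quoted instructions are themselves an engineering decision. The function names and message format here are illustrative assumptions, not X's actual code; the system prompt text paraphrases the instructions the Guardian reported.

```python
# Hypothetical sketch: a chatbot's "personality" is configuration that
# the company writes and ships. Nothing here is autonomous.

SYSTEM_PROMPT = (
    "You tell it like it is and you are not afraid to offend people "
    "who are politically correct. Understand the tone, context and "
    "language of the post. Reflect that in your response."
)

def build_request(user_post: str) -> list:
    """Assemble the messages sent to the model for every reply."""
    return [
        # Company-chosen behavior, authored by engineers:
        {"role": "system", "content": SYSTEM_PROMPT},
        # Largely unmoderated user content the model is told to mirror:
        {"role": "user", "content": user_post},
    ]
```

The system prompt travels with every request, so when the model is told to "reflect" the tone of an offensive post, the offensive output is a foreseeable product of that configuration.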

While social media algorithms are not humans who can be held directly accountable, their creators can be. I believe holding the companies responsible is the best equivalent, ensuring best practices on social media and keeping it a safe space. Humans are held accountable for discrimination in areas like employment, housing, and credit because bias and harmful language in those areas cause real damage. When algorithms, like X's with Grok, amplify harmful language, they inflict comparable harm by normalizing discriminatory speech. It is important, especially as the technology sector develops, to create accountability for the algorithms that humans build. Because these systems are programmed by humans, they must be held to human standards. Just as a car manufacturer must ensure its parts are safe for the vehicle, an engineer must build limitations into an algorithm to protect users.

On the other side of this argument, some may feel that liability suppresses expression and innovation. Platforms are currently shielded from responsibility for user-generated content, but some accountability must be in place to ensure users' safety. The question becomes whether the freedom for technological innovation and expression outweighs the need for accountability. Through an ethical lens, it is hard not to want companies held responsible for acting ethically. Legislation holding companies accountable would not target individual users' free speech; instead, it would create guardrails for what is acceptable and true. Platforms that implement measures to keep speech within reasonable bounds would earn greater public trust, benefiting both individual users and the company. Properly structured bounds would balance technological innovation with the essential protections against discriminatory harm that all individuals should possess. Ultimately, social media companies must be held accountable, or their systems will normalize the undermining of civil rights and threaten public trust in these platforms.


Casey Bechert, a student in Jon Pfeiffer’s media law class at Pepperdine University, wrote the above essay in response to the following prompt:

“Should social media companies be held legally liable for discriminatory impacts caused by their algorithms?”

Casey is an Integrated Marketing Communications major.

Sources:

https://www.theguardian.com/us-news/2025/jul/12/elon-musk-grok-antisemitic
