When Machines Behave Badly: The Dark Side of Algorithms in Our Daily Lives

We live in a world increasingly driven by algorithms. These sets of coded instructions analyze data, compute probabilities, and make automated decisions, often with little human oversight. Algorithms curate our social media feeds, tailor our search results, approve our loans, screen our job applications, and guide self-driving cars. In many ways, they direct our digital experiences and shape our opportunities. But what happens when these seemingly all-powerful lines of computer code go wrong?

In the past decade, we’ve seen algorithms make mistakes with painful real-world consequences, exposing their limitations and potential for harm. While often useful, algorithms can encode human biases, fail to generalize properly, and make high-stakes decisions based on incomplete data. As algorithms take on greater roles in society, tech companies and researchers have a responsibility to proactively audit for fairness, steer away from deception and polarization, and ensure transparency. Otherwise, we risk ending up with machines that behave badly.

Algorithmic Bias and Discrimination

One of the most troubling pitfalls of algorithms is their potential to perpetuate or even amplify human biases. While algorithms are designed to be neutral and objective, the data they are trained on often reflects messy real-world biases. As a result, algorithms can inherit implicitly discriminatory practices. A striking example was Amazon’s experimental AI recruiting tool. When engineers tested it, they found the system discriminated against female candidates, penalizing resumes containing words like “women’s” and filtering out graduates of two all-women’s colleges. The algorithm had learned from historical hiring data dominated by male hires. Unable to overcome those ingrained biases, it began reproducing them. Amazon ended up scrapping the tool entirely.

Issues around algorithmic bias have arisen in other high-stakes sectors like banking, healthcare, and criminal justice. Algorithms are increasingly used to make decisions on loan applications, medical diagnoses, and parole eligibility. But problematic data can produce inequitable results. One study found a commercial algorithm widely used in US hospitals was far less likely to refer black patients to healthcare programs than equally sick white patients. The algorithm had been trained on data from a healthcare system that historically undertreated black patients. Without corrective measures, the discriminatory patterns recurred. While algorithms are capable of ingesting huge amounts of data and detecting subtle patterns, that capability does not automatically translate into socially appropriate or ethical judgment. As algorithms take on greater roles in society, we must prioritize fairness and prevent the codification of systemic biases.
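What would proactive auditing look like in practice? One common first check is to compare a model’s selection rates across demographic groups. Here is a minimal sketch in Python, using entirely made-up decisions and group labels; the “four-fifths” threshold it applies is a rule of thumb from US employment guidance, not a definitive standard.

```python
# Minimal fairness-audit sketch. Decisions and group labels are hypothetical;
# 1 means the candidate advanced, 0 means they were screened out.

def selection_rates(decisions, groups):
    """Fraction of positive decisions for each group."""
    rates = {}
    for group in set(groups):
        picks = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(picks) / len(picks)
    return rates

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest.
    The "four-fifths rule" flags ratios below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

decisions = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                    # e.g. {'A': 0.8, 'B': 0.2}
print(disparate_impact(rates))  # 0.25 -- well below 0.8, worth investigating
```

A real audit would also examine error rates and calibration per group, but the point stands: the check is cheap relative to the harms it can catch.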

Deception, Misinformation, and Filter Bubbles

In addition to discrimination, algorithms can also steer users towards deception, misinformation, and increasingly narrow worldviews. This danger manifests most clearly on social media platforms, where algorithms curate content to maximize engagement. On sites like Facebook, complex proprietary algorithms analyze endless streams of user data to recommend posts, friends, groups, and ads. Their goal is to keep eyes glued to screens as long as possible. But in doing so, they risk manipulating users and distorting reality.

Several years ago, critics charged that Facebook’s algorithms created “filter bubbles” and enabled the spread of fake news. By recommending content similar to what users had already engaged with, the algorithms surrounded people with progressively narrower perspectives. Those skeptical of mainstream news might be recommended increasingly fringe viewpoints, a slippery slope towards conspiracy theories. Users were less likely to encounter alternative narratives or fact-checks. Facebook and others have since tweaked their algorithms to reduce misinformation and broaden exposure, but issues remain. Algorithms designed to maximize engagement continue to promote polarizing, emotive content. Add in financial incentives, and algorithms become potent tools for spreading deception and distorting the truth. While social media companies claim their algorithms merely show people what they want, the reality is more complex. Algorithms do not passively observe preferences. They actively shape perspectives and incentives, and that power requires thoughtful oversight.
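The feedback loop behind this slippery slope is simple enough to simulate. The Python toy below is purely illustrative: it assumes an invented engagement model in which content slightly more extreme than a user’s current view earns the most engagement, and an engagement-maximizing recommender then ratchets the user outward step by step.

```python
# Toy model of an engagement-driven "slippery slope". All dynamics invented.
import random

random.seed(1)

def engagement(item, user):
    """Hypothetical engagement model: peaks for content slightly more
    extreme than the user's current viewpoint."""
    target = user + 0.15 * (1 if user >= 0 else -1)
    return -abs(item - target)

user = 0.1  # viewpoint on a -1 (fringe) .. +1 (fringe) spectrum
for step in range(15):
    candidates = [random.uniform(-1, 1) for _ in range(50)]
    # The recommender serves whichever item it predicts maximizes engagement.
    pick = max(candidates, key=lambda item: engagement(item, user))
    # Exposure nudges the user's viewpoint toward the recommended content.
    user += 0.5 * (pick - user)
    if step % 5 == 0:
        print(f"step {step:2d}: viewpoint = {user:+.2f}")
print(f"final viewpoint: {user:+.2f}")
```

No single recommendation looks extreme in isolation; the drift only shows up across the sequence, which may be part of why this failure mode was slow to be recognized.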

Unreliable Predictions and Generalizations

Algorithms can go bad in other ways beyond bias and manipulation. Though they excel at finding patterns in huge datasets, their predictions do not always generalize accurately to new situations. They can make high-stakes decisions based on correlations rather than causation. For example, researchers found that applying off-the-shelf AI algorithms to US healthcare data produced prediction models that depended heavily on how costly past treatments were, rather than on actual health needs. Essentially, the algorithms equated needing care with having received expensive care, regardless of medical necessity. This could improperly skew treatment plans.
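This proxy problem is easy to reproduce in miniature. The Python sketch below uses entirely invented numbers: two groups of patients with identically distributed health needs, one of which has historically had less access to care. Selecting patients for a care program by the cost proxy, rather than by true need, systematically excludes the under-served group.

```python
# Toy demonstration of the cost-as-proxy-for-need failure. All data invented.
import random

random.seed(0)

patients = []
for _ in range(1000):
    need = random.uniform(0, 10)  # true severity of illness
    group = random.choice(["well_served", "under_served"])
    access = 1.0 if group == "well_served" else 0.6  # historical under-treatment
    cost = need * access + random.gauss(0, 0.5)      # observed past spending
    patients.append({"need": need, "group": group, "cost": cost})

# Enroll the "top 10%" of patients in a care program, two different ways.
k = len(patients) // 10
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:k]
by_need = sorted(patients, key=lambda p: p["need"], reverse=True)[:k]

def under_served_share(selected):
    return sum(p["group"] == "under_served" for p in selected) / len(selected)

print(f"under-served share, ranked by cost proxy: {under_served_share(by_cost):.2f}")
print(f"under-served share, ranked by true need:  {under_served_share(by_need):.2f}")
# Ranking by the proxy admits far fewer under-served patients, even though
# both groups' underlying needs are drawn from the same distribution.
```

Notably, the ranking here never sees the group label at all; the skew comes entirely from the choice of prediction target.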

Algorithms have also shown unpredictability in complex real-world environments. Autonomous vehicles rely heavily on algorithms to perceive their surroundings, predict behavior, and make split-second navigation decisions. But edge cases outside the algorithms’ training data continue to emerge. Crashes still happen due to unexpected circumstances like camera glare, poor weather, or erratic human drivers. While the technology continues to improve, anticipating every scenario an algorithm may encounter has proven challenging. No algorithm is infallible, and its reasoning grows brittle when stretched beyond the bounds of its training data. While useful tools, algorithms require vigilant monitoring when making high-stakes decisions that impact human lives.

The Need for Transparency and Oversight

Taken together, the challenges around algorithms point to a greater need for transparency and human oversight. But many algorithms today operate as “black boxes” concealed behind corporate secrecy and complexity. Users typically see only the algorithmic inputs and outputs, not the internals. We know the choices algorithms make but not how or why. This lack of transparency makes it difficult to audit for issues like bias or to correct unintended errors. Some argue algorithms should be open and interpretable by design. But balancing transparency with complexity is tricky. Simpler algorithms are easier to explain but less powerful. And perfectly interpreting the reasoning of advanced machine learning algorithms may be impractical.
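To make “interpretable by design” concrete, here is a minimal sketch in Python with scikit-learn, on invented loan-approval data. A linear model’s decision can be read directly off its weights; the trade-off described above is that such transparent models are often weaker than the opaque ones they would replace.

```python
# Sketch of an "interpretable by design" classifier. Feature names, data,
# and the approval rule are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["income", "debt_ratio", "years_employed"]

X = rng.normal(size=(500, 3))
# Invented ground truth: approval rises with income and tenure,
# falls with debt ratio.
y = (1.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(0, 0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Unlike a black box, each weight states how a feature pushes the decision,
# so a rejected applicant can be given a concrete reason.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>15}: {weight:+.2f}")
```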

Regardless of internal explainability, external oversight remains critical. There must be processes allowing for meaningful human review, adjustment, and accountability. Individuals should be able to challenge algorithmic decisions that affect them. Policymakers should mandate algorithmic impact assessments for high-risk domains. We cannot take for granted that artificial intelligence will become aligned with social values by default. Achieving fairness, integrity, and trust will take diligent governance.

Building a More Ethical Algorithmic Future

Algorithms are not inherently good or evil. They simply manifest the data and goals they are given, and those are human constructs. So shaping the ethics of algorithms ultimately requires shaping the ethics of people. Tech companies should build diverse teams, whose varied perspectives make potential harms easier to spot. Ethicists should work alongside engineers to account for social impacts early in development. Users should pay attention to how algorithms influence their thinking, and demand more oversight.

With care, algorithms can augment human capabilities in hugely positive ways. But we must approach implementation mindfully. Algorithmic decisions range from the mundane to the deeply consequential. As algorithms extend their reach into finance, medicine, governance, and more, we must remain vigilant. Systems require thoughtful design, continuous monitoring, and accountability measures. Only then can we work towards an algorithmic future that benefits all people equitably. The path will not be easy; there are few simple remedies to complex sociotechnical issues. But with transparency, governance, and ethical innovation, perhaps it can be navigated. The potential gains for knowledge, justice, and human flourishing make it well worth the journey.
