When Algorithms Make Life-Altering Choices


A US F-16 armed with an AGM-88 missile (source: Pixabay)

In a small, pre-war apartment in Brooklyn, New York, an alarm clock rings at 7:00am. Its metallic clangs rouse Tanya Green from sleep, beginning her morning routine. As Tanya brushes her teeth, algorithms analyze her Spotify music choices, calibrating suggestions for her afternoon commute. While Tanya dresses for work, algorithms scan incoming emails, highlighting urgent messages in red and filtering out spam. At the corner bodega where Tanya stops for her morning coffee, algorithms tally the charge to her credit card and record her purchase history. So goes a typical day in Tanya’s life, guided and informed by automated systems making decisions without human involvement. From music apps to email filters, credit scores to social media feeds, algorithms now shape many aspects of modern society. Their rise has transformed old systems and created new opportunities, promising convenience and optimization. But increased reliance on algorithmic decision-making has also raised concerns about transparency, fairness, and accountability.

The Origins of Automated Deciding

The word “algorithm” derives from the Latinized name, Algoritmi, of the 9th-century Persian mathematician al-Khwārizmī, whose treatise on Hindu-Arabic numerals gave medieval Latin the term “algorismus,” denoting step-by-step procedures for decimal arithmetic. In the 13th century, such procedures appeared in the work of Leonardo Fibonacci, who published solutions for problems in arithmetic and algebra. In the 20th century, English mathematician Alan Turing formalized the idea of “computable numbers,” laying foundations for modern computing. After World War II, scientists built the first programmable digital computers, capable of following stored lists of instructions: algorithms, in other words. As computing advanced, algorithms moved beyond pure mathematics into more complex decision-making processes.

Machine learning algorithms, in particular, represented a leap forward in automated decision capabilities. Traditional algorithms follow set steps, producing the same outputs given the same inputs. Machine learning algorithms “learn” by analyzing large datasets, identifying patterns, and adapting their decision processes without explicit programming. This enables nuanced judgments similar to human discretion. Today, algorithms drive an array of routine choices. Recommendation systems utilize behavioral and demographic data to suggest media content, products, even romantic partners. Financial algorithms inform loan decisions, insurance premiums, and stock market trades. Computer vision algorithms can now read handwriting, recognize faces, and detect suspicious activity on security footage. The expansion of automated decision-making is aided by proliferating sensors, wearables, and smartphone apps producing torrents of data—the raw material algorithms use to discern patterns and make predictions.
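The contrast between fixed rules and learned behavior can be sketched in a few lines. The task below (flagging “urgent” messages) and all its rules and training data are invented for illustration: the rule-based version always applies the same hand-written test, while the “learning” version derives its decision threshold from labeled examples.

```python
# Hypothetical toy task: decide whether a message is "urgent".

# Traditional algorithm: fixed rules, same output for the same input.
def rule_based_urgent(message: str) -> bool:
    return "asap" in message.lower() or "deadline" in message.lower()

# "Learning" algorithm: derive a decision threshold from labeled data.
def learn_threshold(examples):
    # examples: list of (exclamation_count, is_urgent) pairs
    urgent = [n for n, label in examples if label]
    calm = [n for n, label in examples if not label]
    # Place the threshold midway between the two class averages.
    return (sum(urgent) / len(urgent) + sum(calm) / len(calm)) / 2

training = [(0, False), (1, False), (3, True), (4, True)]
threshold = learn_threshold(training)  # 2.0 for this data

def learned_urgent(message: str) -> bool:
    return message.count("!") > threshold

print(rule_based_urgent("Please reply ASAP"))  # True
print(learned_urgent("Fire drill now!!!"))     # True
```

Feed the second function different training data and its behavior changes with no reprogramming, which is the essential property the paragraph above describes.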

Automating Criminal Justice

Nowhere has the algorithmic shift been more controversial than in criminal justice systems. Correctional facilities use risk assessment algorithms to predict recidivism and guide parole decisions. Predictive policing algorithms analyze crime data to target locations for increased police presence. Facial recognition algorithms scan crowds for suspects. These technologies promise greater consistency and neutrality compared to biased human decisions. However, civil rights advocates argue that flawed training data and model design often replicate existing inequities. Studies have found that some recidivism algorithms falsely flag Black defendants at nearly twice the rate of white defendants.
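The disparity such studies measure is a false positive rate gap: among people who did not reoffend, how often each group was wrongly flagged high-risk. A minimal sketch of that calculation, using invented numbers rather than any real study's data:

```python
# Toy illustration of the false-positive-rate fairness metric.
# All records below are invented for illustration only.

def false_positive_rate(records):
    # records: list of (flagged_high_risk, reoffended) pairs
    non_reoffenders = [flagged for flagged, reoffended in records
                       if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders)

group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(True, False), (False, False), (False, False), (False, True)]

print(false_positive_rate(group_a))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate(group_b))  # 1 of 3 non-reoffenders flagged
```

An algorithm can be "accurate" overall while its errors fall twice as often on one group, which is why auditors compute these rates per group rather than in aggregate.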

Sentences, including life terms, have been based in part on scores from proprietary predictive systems whose inner workings are withheld from defendants. This lack of transparency fuels concerns that defendants are being denied due process protections.

Beyond sentencing and policing tactics, some jurisdictions are automating the role of judges in bail and parole hearings. But can machines effectively weigh the nuanced, human aspects of these consequential hearings? Critics emphasize the rights at stake, while proponents argue automated systems increase speed and neutrality. Ongoing research aims to enhance fairness but challenges remain. The intersection of algorithms, criminal justice, and civil rights will continue sparking urgent debate.

Managing Financial Futures

The 2008 financial crisis exposed weaknesses in government oversight of markets. In response, the 2010 Dodd-Frank Act mandated sweeping regulatory reforms, and regulators and financial firms increasingly turned to automated monitoring and trading controls, aiming to curb risky behavior through impartial algorithmic oversight. A decade later, algorithms account for well over half of trading volume in major markets.

Machine learning programs scan news and social media to gauge market sentiment. Algorithms make split-second trades based on signals and trends humans can’t detect. Automated systems approve or deny credit applications, set interest rates, and manage customer portfolios.
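In its simplest form, the news-scanning approach described above reduces to scoring text against lists of bullish and bearish terms. The sketch below is a deliberately crude illustration, not any firm's actual model; the word lists and headlines are invented.

```python
# Minimal keyword-based sentiment scoring over headlines (illustrative).
POSITIVE = {"beats", "surges", "record", "growth"}
NEGATIVE = {"misses", "plunges", "lawsuit", "losses"}

def sentiment_score(headline: str) -> int:
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

headlines = [
    "Acme beats earnings estimates, stock surges",
    "Acme faces lawsuit over reported losses",
]
scores = [sentiment_score(h) for h in headlines]
print(scores)  # [2, -2]
```

Production systems replace the word lists with models trained on millions of documents, but the pipeline (ingest text, emit a signal, trade on it faster than humans can react) follows this same shape.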

Proponents argue algorithms enhance market stability and widen access to credit. But patterns of racial discrimination have emerged in automated lending decisions. Critics also blame faulty algorithms for market volatility events like the 2010 Flash Crash. Ensuring fairness and stability remains challenging given the complexity of market algorithms. Ongoing oversight seeks to strike a balance between automation’s upsides and downsides. The impacts on market access, racial equity, job losses for human traders, and systemic risk must all be weighed. Like criminal justice, financial systems show algorithms promising benefits while replicating deeply rooted societal problems. More transparency and accountability mechanisms are widely called for.

Algorithms of War

Beyond domestic systems, algorithms are transforming strategic decision-making in military and intelligence operations. Advanced militaries now deploy lethal autonomous weapons systems able to identify and strike targets without human oversight. While supporters point to their speed and precision, critics warn of diminished accountability and control. Surveillance algorithms empower intelligence agencies to analyze massive datasets, identifying potential threats through pattern recognition. Social media monitoring algorithms can reportedly assess individuals’ likelihood of radicalization based on online activity. Proponents argue this keeps societies safer. However, civil liberties advocates protest the opacity and overreach of these automated tools.

Perhaps the most alarming development is cyber algorithms designed to independently launch offensive hacks and disinformation campaigns against geopolitical adversaries. Unconstrained by diplomacy, such algorithms could escalate tensions on their own, critics warn. Their covert nature would also enable governments to maintain deniability about acts of virtual aggression and subversion.

As in other contexts, automation promises military and intelligence gains in speed, scale, and consistency. But human judgment and oversight remain essential safeguards. How to govern the use of violent and manipulative algorithms by state actors remains dangerously unclear. Like nuclear weapons, some argue, advanced algorithms should be subject to international controls and transparency. As algorithms spread through both civil and martial realms, humanistic wisdom must remain in command. Math cannot replace morality, nor can efficiency trump equity. This fundamental truth grows only more important as society becomes increasingly automated across all facets of life. The road ahead will challenge societies to uphold their values as much as optimize their systems.

The Social Costs of Optimization

Beyond core civic systems, algorithms now optimize social experiences for efficiency and profit. Social media platforms rely on engagement algorithms to maximize user time and ad revenues. Users become trapped in filter bubbles as algorithms feed content matching their interests and views. Critics blame this for exacerbating polarization by limiting exposure to alternate perspectives. Dating apps employ algorithms matching users based on shared hobbies, education, politics, and more. This solves the needle-in-a-haystack problem of finding compatible partners. But some argue it discourages understanding across difference. By algorithmically sorting people into tribes, societal cohesion suffers.
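Attribute-based matching of the kind dating apps use can be sketched with a simple set-overlap measure. Everything below (the profiles, the interest tags, the use of Jaccard similarity) is an invented illustration of the general technique, not any platform's actual formula.

```python
# Rank candidates by Jaccard similarity of stated interests (illustrative).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

user = {"hiking", "jazz", "politics:left", "degree:ba"}
candidates = {
    "pat": {"hiking", "jazz", "politics:left", "degree:ma"},
    "sam": {"gaming", "metal", "politics:right", "degree:ba"},
}
ranked = sorted(candidates,
                key=lambda name: jaccard(user, candidates[name]),
                reverse=True)
print(ranked)  # "pat" ranks first: 3 shared tags out of 5 total
```

Note how the sorting itself produces the tribal effect the paragraph describes: the candidate who differs on politics and taste is pushed down the list by construction.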

Demand-based pricing algorithms adjust costs for flights, concerts, and more based on consumer willingness to pay. Financial software enables split-second trades outpacing human oversight. Each optimization makes systems more effective on its own terms. But the cumulative result may be a less cohesive, equitable society. Algorithms cannot yet replicate human judgment, creativity, ethics, and empathy. Optimizing convenience through automation can disconnect experiences and choices from their broader impacts. More holistic algorithms weighing humanistic factors could balance efficient systems with societal needs.
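Demand-based pricing can be reduced to a one-line rule: scale a base price by how much of the inventory has already sold. The sketch below uses invented parameters, not any vendor's actual formula.

```python
# Toy demand-based pricing: price rises linearly as inventory sells out.
def dynamic_price(base: float, seats_sold: int, capacity: int,
                  max_multiplier: float = 2.0) -> float:
    demand = seats_sold / capacity  # fraction of inventory gone
    multiplier = 1.0 + (max_multiplier - 1.0) * demand
    return round(base * multiplier, 2)

print(dynamic_price(100.0, 0, 200))    # 100.0  (no demand yet)
print(dynamic_price(100.0, 150, 200))  # 175.0  (75% sold)
```

Each such rule is locally rational, which is precisely the point above: the optimization is sound on its own terms, while its distributional effects (who gets priced out) sit outside the formula entirely.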

The Path Forward

From Wall Street to Main Street, algorithms are now indispensable parts of civic and social infrastructure. But as developers pursue optimization, important human values can be left unconsidered. Biases flow freely through mathematics. So do historic injustices and systemic disadvantages. With algorithms playing an increasing role in opportunity, justice, safety, markets, and democracy, society has reached an inflection point. Hard questions lie ahead about who benefits from automation versus who is further marginalized. How can civic values and rights be upheld?

Transparency must increase, enabling outside audits of automated decision systems. Continued research into algorithmic fairness and accountability will be crucial. Given democracy’s core promise of equal treatment under law, extra care should govern algorithm use in criminal justice. And addressing the root causes of social problems must be prioritized over merely managing them efficiently. The rise of automated decision-making presents challenges but also opportunities. Thoughtfully developed and applied, algorithms can enhance human capacities for knowledge, justice, and progress. But the present risks generate real unease: ethical and societal understanding lags behind technical capability. As algorithms become ubiquitous, retaining humanistic sensibilities remains imperative. The path forward requires diligence, openness, and placing people before efficiency.
