<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Algorithmic Society]]></title><description><![CDATA[A community of current and former AI Governance students exploring algorithms in policy and society]]></description><link>https://www.thealgorithmicsociety.com</link><image><url>https://substackcdn.com/image/fetch/$s_!G86m!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feef22030-37a7-4c74-9261-ec0fd70a8ba3_500x500.png</url><title>Algorithmic Society</title><link>https://www.thealgorithmicsociety.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 23:50:04 GMT</lastBuildDate><atom:link href="https://www.thealgorithmicsociety.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Elina Halonen]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[algorithmicsociety@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[algorithmicsociety@substack.com]]></itunes:email><itunes:name><![CDATA[Elina Halonen]]></itunes:name></itunes:owner><itunes:author><![CDATA[Elina Halonen]]></itunes:author><googleplay:owner><![CDATA[algorithmicsociety@substack.com]]></googleplay:owner><googleplay:email><![CDATA[algorithmicsociety@substack.com]]></googleplay:email><googleplay:author><![CDATA[Elina Halonen]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[4 structural problems that make AI accountability difficult in government]]></title><description><![CDATA[Introduction: automation as a new actor in public decision-making]]></description><link>https://www.thealgorithmicsociety.com/p/4-structural-problems-that-make-ai</link><guid isPermaLink="false">https://www.thealgorithmicsociety.com/p/4-structural-problems-that-make-ai</guid><dc:creator><![CDATA[Elina Halonen]]></dc:creator><pubDate>Mon, 01 Dec 2025 15:02:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b93d739c-3d2d-4b57-82f8-d4a9961847b7_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Introduction: automation as a new actor in public decision-making</h2><p>AI systems are increasingly embedded in the everyday work of government. They screen applications for public benefits, support decisions in justice and policing, and shape the allocation of healthcare resources. These tools now operate as quiet administrative actors, influencing outcomes that carry real consequences for individuals and communities.</p><p>This raises a central question: when these systems make errors or produce harmful outcomes, who is responsible? Accountability in automated decision-making is often presented as a matter of technical transparency, but the more persistent barriers are legal, institutional, and structural. The sections below outline four problems that shape the accountability landscape and explain why traditional oversight mechanisms struggle to address them.</p><h2>Opacity is often a legal barrier rather than a technical one</h2><p>The difficulty of understanding complex models is well recognised, yet a significant portion of governmental opacity stems from deliberate legal protections rather than inherent technical limits. 
Many systems used in public administration are supplied by private vendors who rely on intellectual-property and trade-secret protections to prevent disclosure of how their models function.</p><p>The COMPAS recidivism tool is a clear example. Courts and defendants sought to examine how the model reached its conclusions, yet the vendor argued that the model&#8217;s internals were proprietary. This form of commercial secrecy is reinforced by business models that reward the production of complex, closed systems and penalise simplicity or interpretability.</p><p>For governance, the consequence is practical rather than abstract. Decisions that shape a person&#8217;s freedom, access to services, or financial security can be based on systems whose logic cannot be examined by the affected individual, by oversight bodies, or in some cases even by the deploying agency. The &#8220;black box&#8221; becomes a legal construct that limits scrutiny far more effectively than technical complexity.</p><h2>Responsibility is fragmented across an algorithmic supply chain</h2><p>Government AI systems rarely originate from a single actor. They are assembled through layered supply chains involving data providers, model developers, platform infrastructure, and the public agencies that integrate the tool into their operations. In such chains, no single actor has full visibility, leading to what accountability scholars describe as the &#8220;problem of many hands.&#8221;</p><p>This fragmentation creates several issues:</p><ul><li><p>Agencies often cannot trace the provenance or quality of the training data.</p></li><li><p>Vendors may simultaneously hold substantial influence over how systems operate while claiming the limited responsibilities of a subordinate &#8220;data processor.&#8221;</p></li><li><p>Decision-making authority and technical control are unevenly distributed, yet lines of accountability remain anchored in traditional administrative structures that assume singular responsibility.</p></li></ul><p>This produces what some analysts call an &#8220;accountability horizon&#8221;: each actor can only see a limited distance along the chain, and beyond that point responsibility becomes indeterminate. A 2021 report by the Netherlands Court of Audit illustrates the effect. Agencies deploying algorithmic systems could not access documentation needed to evaluate their functioning and were instructed to defer to external suppliers. In such conditions, responsibility drifts rather than concentrates.</p><h2>Oversight bodies lack the capacity to scrutinise algorithmic systems</h2><p>Democratic accountability depends on institutional forums capable of examining and questioning state action. For algorithmic systems, these forums often include municipal councils, courts of audit, ombuds institutions, and national supervisory bodies. 
Yet many of these institutions are chronically under-resourced and lack the technical expertise needed to evaluate automated systems.</p><p>Several recurring constraints appear:</p><ul><li><p><strong>Financial limits:</strong> Some local audit offices operate on budgets that cannot support even a single in-depth technical investigation.</p></li><li><p><strong>Expertise gaps:</strong> Many oversight bodies struggle to interpret model documentation, data flows, or system behaviour.</p></li><li><p><strong>Misplaced boundaries:</strong> Algorithmic design choices are frequently framed as purely technical matters, which leads forums to underestimate their relevance to democratic oversight.</p></li></ul><p>When oversight organisations cannot interrogate or contest a system&#8217;s functioning, the effect is a widening accountability deficit. Government agencies may use systems that shape significant administrative decisions without independent verification of accuracy, fairness, or compliance with legal standards.</p><h2>The &#8220;human in the loop&#8221; does not guarantee meaningful control</h2><p>A common safeguard proposed for automated decision-making is the inclusion of a human reviewer who must validate or override algorithmic outputs. While attractive in principle, this safeguard often falters in practice. The core issue is automation bias: the well-documented tendency for people to defer to algorithmic recommendations even when they have the authority to reject them.</p><p>In high-volume or highly technical contexts, human reviewers may lack the information needed to critically assess an output. When the model&#8217;s reasoning is inaccessible, the reviewer&#8217;s role can degrade into a procedural formality. The presence of a human becomes a symbolic assurance of oversight rather than a substantive one.</p><p>This creates a dual problem. The appearance of human judgement is maintained, which can shield the system from scrutiny, but the substantive capacity for intervention is weakened. The final decision is nominally human yet materially governed by the system&#8217;s recommendations.</p><h2>Conclusion: accountability requires institutional redesign, not just technical fixes</h2><p>The difficulties outlined above reveal that challenges in algorithmic governance arise less from exotic technical problems and more from familiar administrative dynamics: commercial secrecy, diffusion of responsibility, resource constraints, and cognitive biases that affect decision-making. Addressing these problems requires institutional change rather than simply improving model transparency.</p><p>Governments will need procurement rules that prevent secrecy from undermining public oversight, clearer allocations of responsibility across supply chains, stronger investment in supervisory bodies, and decision-making environments where human review is feasible and informed rather than nominal. 
As automated systems become part of the underlying structure of public administration, accountability will depend on how institutions adapt, not only on how models are built.</p>]]></content:encoded></item><item><title><![CDATA[The quiet failure of AI in government]]></title><description><![CDATA[The promise and the reality of algorithmic government]]></description><link>https://www.thealgorithmicsociety.com/p/the-quiet-failure-of-ai-in-government</link><guid isPermaLink="false">https://www.thealgorithmicsociety.com/p/the-quiet-failure-of-ai-in-government</guid><dc:creator><![CDATA[Elina Halonen]]></dc:creator><pubDate>Mon, 01 Dec 2025 15:00:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1974239c-9e89-47ac-b72c-f00c5328d517_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Across the world, governments are adopting artificial intelligence systems to manage welfare, policing, taxation, and other core administrative tasks. The promise is familiar: faster processing, lower costs, and decisions that are more consistent and less biased than human judgement. An algorithm that can process thousands of claims a day or flag potential fraud cases appears to offer a cleaner, more objective public sector.</p><p>What is emerging in practice is more complicated. Rather than solving entrenched social problems, many deployments of AI are entrenching them in new, less visible ways. Systems presented as neutral are amplifying longstanding inequalities; models sold as accurate are built on unreliable data; tools marketed as &#8220;smart&#8221; are deployed without basic evaluation. The issue is not a handful of technical glitches. It is a pattern of design and governance choices that undermines fairness and accountability.</p><p>The sections below outline five recurring problems in how AI is used in public administration and why they matter for anyone interested in digital governance.</p><h2>Automation scales bias instead of removing it</h2><p>A central claim about AI in government is that mathematical models can remove prejudices from decision-making. Algorithms are presented as neutral mechanisms that apply the same rules to everyone.</p><p>In practice, models trained on historical data learn the patterns and biases embedded in that data. The well-known Amazon recruitment case illustrates the mechanism: a hiring model trained on CVs from a male-dominated workforce learned to penalise terms associated with women and downgraded applicants from women&#8217;s colleges. Nothing in the code explicitly instructed it to do so; the pattern was inherited from the past.</p><p>Public-sector systems face the same dynamic. When historical records reflect discriminatory policing, unequal access to services, or biased enforcement, models built on that data will reproduce those patterns. The result is not the removal of bias but its codification into rules that appear objective and are applied consistently and at scale. Where individual bias is at least visible and contestable, algorithmic bias can be harder to detect and easier to defend as &#8220;just what the data says.&#8221;</p><h2>Government models are trained on &#8220;dirty data&#8221;</h2><p>AI systems depend on the quality of the data they are trained on. In many public-sector contexts, that data is compromised in ways that go far beyond occasional errors. 
Researchers describe this as &#8220;dirty data&#8221;: datasets that embed the effects of unlawful practices, organisational incentives, and systematic recording problems.</p><p>Examples include:</p><ul><li><p><strong>Manipulated statistics:</strong> crime numbers altered to meet performance targets or political expectations.</p></li><li><p><strong>Unlawful enforcement practices:</strong> records generated by unconstitutional stops, discriminatory checks, or other illegal actions.</p></li><li><p><strong>Large-scale recording errors:</strong> serious offences misclassified as minor ones, missing entries, and inconsistent coding.</p></li></ul><p>If these records form the basis for predictive policing, risk scoring, or resource allocation, the model learns a distorted picture of reality. Instead of forecasting future harm or need, it learns where institutions have chosen to act in the past. The system then presents those inherited distortions as neutral predictions.</p><p>For governance, this is a basic epistemic problem: when the input is structured by institutional failure, there is no technical fix that can make the output objective.</p><h2>Predictive systems can lock institutions into self-fulfilling loops</h2><p>Dirty data becomes more damaging when it is fed back into the system through repeated use. Predictive tools in areas like policing or welfare risk scoring can create self-reinforcing cycles.</p><p>A standard pattern in predictive policing looks like this:</p><ol><li><p>A model is trained on historical arrest data that reflects over-policing in particular neighbourhoods.</p></li><li><p>The model flags those neighbourhoods as high risk.</p></li><li><p>Police are deployed there more frequently.</p></li><li><p>Increased presence produces more recorded offences, especially for minor infractions.</p></li><li><p>New data confirms the model&#8217;s original assessment, justifying further deployments.</p></li></ol><p>Over time, the system becomes very good at predicting where police will make arrests, not where crime is most prevalent. The model&#8217;s apparent accuracy is a product of its own influence on institutional behaviour.</p><p>This matters for AI governance because it shifts the role of models from tools that inform decisions to mechanisms that stabilise particular practices. It becomes harder for agencies to change course, even when communities bear the cost of intensified scrutiny.</p><h2>People are judged through weak proxies and arbitrary signals</h2><p>Many algorithmic systems in government assess concepts that are hard to measure directly, such as &#8220;risk of fraud,&#8221; &#8220;likelihood of recidivism,&#8221; or &#8220;need for care.&#8221; Developers therefore reach for proxies: data points that are easier to record and quantify.</p><p>When proxies are poorly chosen, they encode arbitrary or unjust distinctions. The example of a system using possession of a trailer, a diesel car, or a wheelie bin as indicators of suspicious behaviour shows how far this can drift from any meaningful concept of risk.</p><p>In health, one widely discussed case involved a tool that used healthcare expenditure as a proxy for healthcare need. Because marginalised groups have historically received less care and funding, their costs were lower. The model inferred that they had lower need and recommended fewer resources for them. 
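</p><p>A deliberately simplified sketch of the mechanism, using invented numbers rather than any real system&#8217;s data, shows how ranking on the spending proxy diverges from ranking on need:</p><pre><code># Stylised sketch with invented numbers: two patients with the same underlying
# need, but one belongs to a group that has historically received less care,
# so the spending figure the model actually sees is lower.
patients = [
    {"id": "patient_A", "underlying_need": 8, "recorded_spending": 9_400},
    {"id": "patient_B", "underlying_need": 8, "recorded_spending": 4_700},  # historically under-served
]

# A model trained to predict spending ranks people by the proxy, not by need.
ranked_by_proxy = sorted(patients, key=lambda p: p["recorded_spending"], reverse=True)

print("Prioritised for extra care:", ranked_by_proxy[0]["id"])        # patient_A
print("Deprioritised despite equal need:", ranked_by_proxy[1]["id"])  # patient_B
</code></pre><p>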
Underinvestment in care was translated into an apparently neutral signal of reduced risk.</p><p>These design choices matter because they determine who receives attention, scrutiny, or support. People are judged not on their circumstances or actions but on correlations that may have little to do with the underlying policy goals. Once embedded in software, these proxies are hard to contest and are often shielded by claims of commercial confidentiality.</p><h2>Procurement is driving adoption faster than oversight can keep up</h2><p>The rapid spread of AI in public administration is not primarily driven by internal technical capacity. It is driven by procurement. Vendors offer ready-made solutions to agencies under pressure to modernise and to demonstrate that they are &#8220;data-driven.&#8221; The result is a market where ambitious claims often outrun evidence.</p><p>Public administrations frequently face a skills gap: limited internal expertise in machine learning, data governance, or model evaluation. Vendors, by contrast, specialise in presenting their products as powerful, safe, and innovative. This creates an asymmetry of expertise in which governments are strongly reliant on the seller&#8217;s own assurances.</p><p>Recent work by the Netherlands Court of Audit gives a concrete sense of how this plays out. It found that many agencies did not systematically assess the risks of the AI systems they were using and often did not know whether those systems worked as intended. In other words, tools that influence who is investigated, who is paid, or who is flagged as a risk can operate for extended periods without basic performance or impact evaluation.</p><p>For democratic governance, this raises a direct accountability question: when neither the public nor the administrators can say how a system behaves or whether it is meeting its objectives, on what basis is its continued use justified?</p><div><hr></div><h2>Conclusion: governing systems, not just deploying them</h2><p>The main risks of AI in government do not lie in speculative superintelligence. They emerge from very ordinary factors: biased and unreliable data, weak proxies, self-reinforcing feedback loops, and procurement processes that prioritise adoption over evaluation.</p><p>These are governance problems. They cannot be solved by adding another layer of technical sophistication. They require decisions about where automation is appropriate, which tasks demand inherently interpretable systems, what data is acceptable to use, and how oversight is enforced.</p><p>If AI is to have a legitimate role in public administration, it needs to be embedded in institutions that can explain, contest, and, when necessary, switch off the systems they use. Automation for its own sake is not a public good. 
Automated decisions only serve the public interest when the underlying models, data, and incentives are subject to the same scrutiny we expect of any other exercise of state power.</p>]]></content:encoded></item><item><title><![CDATA[BOOK REVIEW: More Everything Forever and the Architecture of Silicon Valley’s Future Myths]]></title><description><![CDATA[Adam Becker&#8217;s More Everything Forever examines a set of ideas that shape how parts of the technology sector imagine the future.]]></description><link>https://www.thealgorithmicsociety.com/p/book-review-more-everything-forever</link><guid isPermaLink="false">https://www.thealgorithmicsociety.com/p/book-review-more-everything-forever</guid><dc:creator><![CDATA[Elina Halonen]]></dc:creator><pubDate>Mon, 01 Dec 2025 14:43:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8464e271-a22c-4871-904b-582a95b640a4_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Adam Becker&#8217;s <em>More Everything Forever</em> examines a set of ideas that shape how parts of the technology sector imagine the future. Becker is not interested in the everyday discourse that surrounds artificial intelligence or automation. His focus is the intellectual ecosystem around Silicon Valley&#8217;s most influential founders, funders, and thinkers&#8212;the communities that frame AI development through long time horizons, exponential curves, and civilisational stakes. The book works as an excavation of the conceptual scaffolding that underpins these narratives.</p><p>Becker identifies a worldview organised around abstraction. Future-oriented calculations replace present-day harm. Long-range expected-value equations stand in for ethics. Engineering metaphors substitute for political institutions. Across different domains&#8212;moral philosophy, AI speculation, space colonisation&#8212;Becker finds the same pattern: complex realities are stripped down to simplified systems that scale cleanly on paper and fail when confronted with the world as it is.</p><h2><strong>Longtermism and the arithmetic of the distant future</strong></h2><p>One of Becker&#8217;s central targets is longtermism, a philosophy that prioritises the welfare of hypothetical future beings over people alive today. Becker&#8217;s critique is not simply that the focus is distant; it is that the reasoning depends on multiplying immense hypothetical populations by extremely small probabilities. This produces conclusions where speculative research on far-future technologies can be deemed more valuable than improving the lives of billions of people in the present.</p><p>Becker highlights how this line of reasoning can tilt moral attention away from existing inequality. By design, the calculus makes present harms mathematically negligible. 
The problem, for Becker, is not the idea that future people matter; it is that certain abstractions allow present suffering to disappear from view entirely.</p><h2><strong>The Singularity and the promise of accelerating returns</strong></h2><p>A second component of the worldview Becker analyses is the conviction that exponential technological growth will soon produce artificial general intelligence or &#8220;superintelligence.&#8221; This belief rests on the assumption that current trends&#8212;most notably Moore&#8217;s Law&#8212;will continue indefinitely.</p><p>Becker counters this with empirical observations familiar to researchers: semiconductor scaling is slowing, research productivity is falling in many fields, and biological intelligence cannot be reduced to a neat digital analogy. These points are not novel in themselves, but Becker uses them to show how the narrative of inevitable acceleration persists even when material constraints suggest a more complex picture. It is the persistence of the abstraction, not the details of the trend line, that interests him.</p><h2><strong>Existential risk and the displacement of present harms</strong></h2><p>Becker also traces how certain elite groups have made &#8220;AI existential risk&#8221; a central organising story. The well-known paperclip thought experiment functions here as an emblem: a simplified model of an artificial agent pursuing a goal without regard for human welfare.</p><p>Becker&#8217;s concern is the opportunity cost of attention. While researchers such as Timnit Gebru, Emily Bender, and others document present harms&#8212;carbon costs, linguistic biases, discriminatory outputs&#8212;public debate is repeatedly pulled toward speculative future catastrophes. Meredith Whittaker is quoted to good effect: when the problem is framed in a fantasy world, the solutions migrate there as well. Becker shows how existential narratives can crowd out governance discussions grounded in material impacts.</p><h2><strong>Utopias built on controlled environments</strong></h2><p>Becker then turns to the futures envisioned by figures like Ray Kurzweil and Elon Musk. These imagined worlds involve space settlements, large-scale planetary engineering, or the conversion of matter into computational substrate. Becker&#8217;s point is not to dismiss ambition but to highlight a recurring preference for controlled, predictable environments over the complexities of natural systems. The result is a form of utopian thinking in which nature becomes either a problem to escape or a resource to reorganise for technical ends.</p><p>The critique here is structural: a worldview built on simplifying assumptions can lead to futures that are inhospitable to the very forms of life they aim to preserve.</p><h2><strong>The intellectual lineage behind the rhetoric</strong></h2><p>Becker closes by mapping some of the contested intellectual history surrounding these communities: the rationalist and effective altruist movements, their entanglements with controversial claims about intelligence, and the echoes of earlier techno-futurist manifestos. His argument is that when a worldview is constructed from highly abstract models of humanity, it can become vulnerable to reductionist conclusions about people themselves. 
The point is not that the movements are uniform&#8212;they are not&#8212;but that certain baseline assumptions can create openings for ideas that flatten human complexity.</p><h2><strong>A book for governance discussions, not just tech criticism</strong></h2><p><em>More Everything Forever</em> is not an attack on technology or future-thinking. It is a study of how certain influential groups reason about the future and how those modes of reasoning scale into public discourse. For students of AI governance, the value of the book lies in its account of how narratives take shape: how specific epistemic communities define risks; how simplified models travel into policy debates; and how speculative futures can overshadow present harms.</p><p>Silicon Valley&#8217;s most ambitious visions are not only technical blueprints but political imaginaries. Becker&#8217;s book is useful for understanding the assumptions that sit underneath them&#8212;and for recognising that the governance of algorithmic systems requires grounding decisions in empirical harms rather than in abstractions that treat complexity as a nuisance to be engineered away.</p>]]></content:encoded></item><item><title><![CDATA[The Limits of Explainable AI]]></title><description><![CDATA[Why Explanations Fall Short of Accountability]]></description><link>https://www.thealgorithmicsociety.com/p/the-limits-of-explainable-ai</link><guid isPermaLink="false">https://www.thealgorithmicsociety.com/p/the-limits-of-explainable-ai</guid><dc:creator><![CDATA[Elina Halonen]]></dc:creator><pubDate>Mon, 01 Dec 2025 14:37:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/103fe331-c3ea-45c3-90a9-26db5b93d5e5_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As algorithmic systems take on roles once reserved for human officials&#8212;evaluating loan applications, assigning welfare risk scores, screening job candidates&#8212;the desire for transparency has moved from a technical preference to a democratic requirement. People want to understand why an automated system reached a particular outcome, especially when that outcome is consequential. The idea of a &#8220;black box,&#8221; in which even developers cannot fully articulate a model&#8217;s internal logic, is a source of public anxiety and institutional discomfort.</p><p>Explainable AI (XAI) emerged as the field&#8217;s attempt to answer this problem. It promises a way to make complex systems intelligible: a translation layer, a peek inside the box, a path toward accountability. In public debate, XAI is often presented as the bridge between advanced computation and human oversight.</p><p>Yet the practical reality is more complicated. Much of what is commonly believed about explainability does not hold up when examined through a governance lens. The gap between what XAI offers and what accountability requires is wide, and in some cases unbridgeable. Understanding that gap is essential for anyone working on AI governance, procurement, or oversight.</p><h2><strong>The accuracy&#8211;interpretability trade-off is overstated</strong></h2><p>A persistent assumption in AI is that there is an unavoidable trade-off between model accuracy and interpretability. The story goes like this: simple models can be understood but are too weak for high-stakes tasks; powerful models are necessarily opaque. Choosing transparency, therefore, means accepting inferior performance.</p><p>Empirical research undermines this assumption. 
Work by Cynthia Rudin and others shows that for many structured, high-stakes domains, inherently interpretable models often perform as well as the black-box alternatives. Transparency does not always require sacrificing accuracy.</p><p>The belief in this trade-off persists partly because it supports the commercial interests of proprietary vendors. A model that cannot be disclosed is commercially valuable. Declaring opacity an unavoidable by-product of performance becomes a way to justify secrecy. Framing interpretability as technically impossible masks a simpler reality: sometimes secrecy is a business choice, not a scientific constraint.</p><h2><strong>&#8220;Gaming the system&#8221; is often an argument against scrutiny, not a real risk</strong></h2><p>Another common objection to transparency is the fear that revealing how a system works will allow people to manipulate it. This argument is frequently invoked by public agencies and vendors to justify withholding information about model design.</p><p>From a governance perspective, the claim is weak. If a system can be easily &#8220;gamed,&#8221; the problem lies with its design, not with transparency. In a well-specified model, gaming should correspond to socially meaningful improvements&#8212;raising a credit score, improving punctuality in attendance data, correcting inaccurate administrative records. If gaming produces harmful distortions, then the model is relying on proxies that were inappropriate from the outset.</p><p>Moreover, many features used in real-world models are immutable. A person cannot change their age, past interactions with the justice system, or most administrative data. The argument that transparency invites gaming is often a way to avoid external scrutiny rather than a serious statement about risk.</p><h2><strong>XAI explanations are approximations, not the model&#8217;s actual reasoning</strong></h2><p>The most important limitation of XAI is conceptual. Explanations generated by XAI tools are not windows into the original model. They are simplified second-order models designed to mimic the behaviour of the underlying system within a narrow context.</p><p>This means that explanations are approximations by design. They can be informative in cooperative settings&#8212;debugging, model development, internal review&#8212;but they are not faithful representations of how a complex model actually reaches its decisions. A 90-percent-accurate explanation still fails 10 percent of the time. For oversight of high-stakes public decisions, that level of uncertainty is untenable.</p><p>When the explanation is not the truth, the system cannot reasonably claim to be transparent. And if the explanation cannot be trusted, the underlying model cannot be trusted either.</p><h2><strong>Explainability fails precisely in the contexts where accountability is required</strong></h2><p>The usefulness of XAI depends on context. In cooperative settings&#8212;where developers and auditors share a common goal&#8212;XAI is a valuable tool. It helps identify errors, refine features, and diagnose performance issues.</p><p>Accountability settings are different. When an individual contests a decision made by a public agency or a private institution, the relationship becomes adversarial. In this context, the provider of the system has an incentive to select an explanation that portrays the model&#8217;s behaviour as consistent and defensible. 
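</p><p>A toy sketch makes that discretion concrete. The model and numbers below are invented, and the &#8220;explanation&#8221; is a crude perturbation-based attribution rather than any particular XAI library, but it shows how the same decision can be given two quite different stories depending on how the black box is probed:</p><pre><code># Invented model and numbers: a post-hoc "explanation" is an approximation of
# the black box, and what it says depends on choices made by whoever builds it.

def black_box(income, prior_flags):
    # Stand-in for an opaque risk score with a hidden threshold on income.
    score = 0.2 * prior_flags
    if income >= 25_000:
        return score
    return score + 0.5  # large hidden penalty for low-income applicants

def attribution(point, income_step):
    """Crude per-feature attribution: perturb one feature, record the change."""
    income, prior_flags = point
    base = black_box(income, prior_flags)
    return {
        "income": black_box(income + income_step, prior_flags) - base,
        "prior_flags": black_box(income, prior_flags + 1) - base,
    }

applicant = (20_000, 2)  # the decision being "explained"

# Two defensible probing choices tell two different stories about income:
print(attribution(applicant, income_step=1_000))   # income looks irrelevant
print(attribution(applicant, income_step=10_000))  # income looks decisive
</code></pre><p>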
XAI gives providers significant discretion to shape how the system&#8217;s decision is presented, which allows narrative control rather than genuine transparency.</p><p>This is the core governance problem: explanations can be curated. Disclosure cannot.</p><h2><strong>Explainability shifts power toward providers and away from the public</strong></h2><p>The rise of XAI reflects a broader shift in how transparency is defined. Instead of demanding access to the system itself&#8212;its code, its training data, its parameters&#8212;institutions increasingly rely on &#8220;reasoned transparency&#8221;: a mediated explanation prepared by the system&#8217;s creator.</p><p>This shift matters. Mediated explanations allow organisations to maintain secrecy while still claiming openness. They can select the level of detail, frame the logic of the system, and shape public understanding. The result is a form of managed transparency that risks manufacturing trust rather than earning it.</p><p>For public accountability, this is inadequate. Oversight requires the ability to interrogate a system directly, not accept a curated summary.</p><h2><strong>Conclusion: building transparency into the system, not adding it after the fact</strong></h2><p>Explainability has legitimate uses in technical practice, but it is not a substitute for genuine transparency. For high-stakes decisions in public administration or essential services, accountability cannot depend on post-hoc narratives.</p><p>A more robust approach requires demanding interpretability by design. Public authorities can mandate inherently transparent models in procurement, develop models in-house where possible, and require full disclosure when private vendors supply decision systems. These are governance choices, not technical impossibilities.</p><p>Explainable AI may illuminate parts of a black box, but it cannot solve the accountability problem created by the box itself.</p>]]></content:encoded></item><item><title><![CDATA[Why AI Risk Looks Different in Different Places]]></title><description><![CDATA[Public debates about artificial intelligence are often presented as if the world is talking about one shared problem.]]></description><link>https://www.thealgorithmicsociety.com/p/why-ai-risk-looks-different-in-different</link><guid isPermaLink="false">https://www.thealgorithmicsociety.com/p/why-ai-risk-looks-different-in-different</guid><dc:creator><![CDATA[Elina Halonen]]></dc:creator><pubDate>Mon, 01 Dec 2025 14:28:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/be99124e-adb6-4552-b8fa-fa8833177812_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Public debates about artificial intelligence are often presented as if the world is talking about one shared problem. Policy briefs speak of &#8220;global challenges,&#8221; and news reporting treats AI risk as a universal concern. Yet risk is not a neutral category. Societies interpret technological danger through the political, institutional, and cultural frameworks they already use to make sense of the world. What counts as a threat in one region can seem speculative or marginal in another.</p><p>This matters for AI governance because policymaking does not occur in an abstract vacuum. Regulatory frameworks, public expectations, and media attention are shaped by different conceptions of what the central problems are. 
Understanding these variations is part of understanding the &#8220;algorithmic society&#8221; itself.</p><h2><strong>The United States: existential stakes and agentic systems</strong></h2><p>In the United States, AI risk is frequently framed in existential or agentic terms: runaway optimisation, superintelligent systems, and the possibility that a misaligned artificial agent could overpower human decision-making. These ideas are not broadly distributed across American society; they are amplified by a small number of influential epistemic communities clustered around Silicon Valley, Effective Altruism, rationalist forums, and high-visibility founders.</p><p>These groups are disproportionately represented in US media coverage. Their interviews, op-eds, and philanthropic influence make AGI-centric concepts&#8212;superintelligence, catastrophic misalignment, extinction risk&#8212;more visible than their actual prevalence in technical research or public opinion. Mainstream outlets often quote these voices when covering AI developments, even when they aim to question or critique them.</p><p>The result is a public discourse where &#8220;AI risk&#8221; often defaults to questions about AGI, automation run amok, or catastrophic futures. This framing shapes what policymakers think the public is concerned about and what kinds of solutions are deemed urgent. Safety labs, model evaluations, and alignment research dominate headlines, while governance debates about labour, surveillance, or platform power receive less attention relative to their material impact.</p><h2><strong>Europe: governance failures and institutional harms</strong></h2><p>European discussions follow a different trajectory. AI risk is most often described as a problem of governance: bias, discrimination, misleading outputs, opacity, surveillance, labour disruption, and threats to democratic institutions. The EU&#8217;s regulatory tradition anchors this framing. Discussions about AI naturally link to the rule of law, due process, rights protection, and institutional safeguards.</p><p>The EU AI Act is a reflection of this approach. Rather than imagining AI as an autonomous agent that might &#8220;break free,&#8221; EU institutions focus on the systems people already use: workplace monitoring tools, biometric identification, recommender systems, automated decision-making in public services. European media coverage follows the same pattern. The dominant risks are structural, not existential.</p><p>This contrast does not imply that Europeans are unconcerned about future-oriented scenarios. Instead, the public discourse prioritises harms that are already observable or foreseeable within existing political and economic structures. Where US narratives often focus on systems gaining too much autonomy, European narratives focus on institutions holding too little accountability.</p><h2><strong>Why these differences matter for AI governance</strong></h2><p>These divergences have practical implications.</p><p>First, they shape policy agendas. If US debate centres existential risk, policymakers become more sensitive to catastrophic-scenario arguments. If European debate centres governance failures, regulation emphasises transparency, oversight, and institutional safeguards.</p><p>Second, they affect how governments and regulators interpret industry claims. Companies based in the US increasingly frame their products as powerful enough to require specialised &#8220;alignment&#8221; processes. 
European regulators, by contrast, ask for documentation, compliance assurance, and risk assessments. Each region is responding to a different definition of the problem.</p><p>Third, these narratives influence the development of the systems themselves. Training data, model evaluations, safety protocols, and risk registers embed assumptions about what counts as harm. If developers in different regions internalise different narratives, the resulting systems may handle uncertainty and alignment differently.</p><p>Fourth, cross-regional collaboration becomes harder when actors mean different things by &#8220;AI safety.&#8221; Without a shared conceptual baseline, international governance efforts can drift or stall. Interoperability is not only a technical question but a discursive one.</p><h2><strong>Why students and practitioners should pay attention</strong></h2><p>For those working in AI governance&#8212;or preparing to&#8212;these narrative differences are not academic details. They shape:</p><ul><li><p>how policymakers justify interventions</p></li><li><p>which risks receive funding and regulatory traction</p></li><li><p>how companies frame compliance obligations</p></li><li><p>what the public believes is at stake</p></li><li><p>what kinds of expertise are considered relevant</p></li></ul><p>Understanding regional framings helps future practitioners navigate transatlantic policy conversations, evaluate claims made by developers, and interpret risk assessments with more precision. It also highlights the need to distinguish between material harms and narrative-driven concerns.</p><p>As algorithmic systems become embedded in public administration, security, welfare provision, and everyday life, governance will require attention to both the technical system and the discursive environment that defines its risks.</p><h2><strong>Toward a grounded understanding of AI risk</strong></h2><p>AI governance is often described as a global challenge, but the underlying narratives are far from uniform. Recognising these differences is the first step toward building governance structures that respond to real harms rather than imported imaginaries.</p><p>For students, researchers, and practitioners in the Algorithmic Society community, this is an opportunity to rethink what &#8220;AI risk&#8221; means in practice and how the stories we tell shape the systems we build and regulate.</p>]]></content:encoded></item><item><title><![CDATA[Why 'Fair' AI Is Mathematically Impossible ]]></title><description><![CDATA[We are increasingly turning to algorithms to make critical decisions about our lives.]]></description><link>https://www.thealgorithmicsociety.com/p/why-fair-ai-is-mathematically-impossible</link><guid isPermaLink="false">https://www.thealgorithmicsociety.com/p/why-fair-ai-is-mathematically-impossible</guid><dc:creator><![CDATA[Elina Halonen]]></dc:creator><pubDate>Mon, 01 Dec 2025 14:27:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a4b19ba7-351c-4f2e-a237-ce475633c5d0_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are increasingly turning to algorithms to make critical decisions about our lives. From who gets a loan to who gets a job, and even who might be a high risk for reoffending, automated systems are becoming the new arbiters of opportunity and justice. This shift is often fueled by a core belief: that algorithms, unlike humans, are objective, impartial, and fundamentally fairer. 
We trust the code to be free of the messy, irrational biases that plague human judgment.</p><p>But this belief, while comforting, is built on a fragile foundation. The concept of &#8220;algorithmic fairness&#8221; is not a simple technical problem to be solved with better code. It is a minefield of mathematical paradoxes, unavoidable trade-offs, and deep ethical questions. The pursuit of a perfectly fair algorithm often leads us to uncomfortable conclusions about our technology and ourselves.</p><p>This article will explore four surprising truths about AI fairness that challenge our common assumptions. These are not edge cases or theoretical worries; they are fundamental constraints and realities that anyone building, deploying, or being judged by an algorithm needs to understand.</p><p><strong>Perfect Fairness Is Mathematically Impossible</strong></p><p>The single most startling revelation from research into algorithmic fairness is this: under real-world conditions, a model cannot satisfy all desirable fairness metrics at the same time. This isn&#8217;t a failure of engineering; it is a mathematical certainty known as an &#8220;impossibility theorem.&#8221;</p><p>To understand why, we need to look at what we mean by &#8220;fair.&#8221; There are many different mathematical standards. For example, we might want a model to satisfy:</p><ul><li><p><strong>Statistical Parity:</strong> This standard requires that the proportion of individuals receiving a positive outcome (like a loan approval) is the same across all groups. It focuses purely on equalizing outcomes.</p></li><li><p><strong>Predictive Parity:</strong> This standard requires the model to be equally accurate for all groups. If the algorithm predicts someone will succeed, the probability of them actually succeeding should be the same regardless of their race or gender.</p></li><li><p><strong>Equalized Odds:</strong> This standard requires the model to make the same <em>types</em> of errors for all groups. It ensures that the rate of false positives (incorrectly flagging someone as high-risk) and false negatives (incorrectly flagging someone as low-risk) is equal across different demographic groups.</p></li></ul><p>Each of these sounds like a reasonable goal. The problem is that when two common conditions are met&#8212;<strong>(1)</strong> the underlying &#8220;base rates&#8221; (the actual prevalence of an outcome) differ between groups, and <strong>(2)</strong> the model is not 100% accurate&#8212;it is mathematically impossible to satisfy Predictive Parity and Equalized Odds at the same time. You are forced to choose. This is profoundly impactful because it means there is no single, technically &#8220;correct&#8221; fair solution. The pursuit of fairness requires choosing which <em>type</em> of fairness to prioritize, and by extension, which type of unfairness we are willing to accept.</p><p><strong>The COMPAS Paradox: Biased Against Everyone, Differently</strong></p><p>The mathematical trade-offs of fairness are not just theoretical. A famous 2016 ProPublica analysis of the COMPAS algorithm, a tool used to predict the likelihood of a defendant reoffending, provides a stark real-world example. 
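</p><p>Before turning to the findings, it helps to see how group-wise error rates of this kind are measured. The sketch below uses invented toy data, not ProPublica&#8217;s figures; only the calculation, comparing false positive and false negative rates across groups, mirrors the published analysis:</p><pre><code># Invented toy data, not ProPublica's figures: each record is
# (group, actually_reoffended, flagged_high_risk) with 1 = yes, 0 = no.
records = [
    ("A", 0, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 0), ("B", 1, 1),
]

def error_rates(rows):
    """Return (false positive rate, false negative rate) for one group."""
    did_not_reoffend = [r for r in rows if r[1] == 0]
    did_reoffend = [r for r in rows if r[1] == 1]
    fpr = sum(r[2] == 1 for r in did_not_reoffend) / len(did_not_reoffend)
    fnr = sum(r[2] == 0 for r in did_reoffend) / len(did_reoffend)
    return fpr, fnr

for group in ("A", "B"):
    group_rows = [r for r in records if r[0] == group]
    fpr, fnr = error_rates(group_rows)
    print(f"group {group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
</code></pre><p>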
The investigation found significant racial disparities in the algorithm&#8217;s errors, but in a surprisingly complex way.</p><p>The analysis revealed two different kinds of bias working in opposite directions:</p><ul><li><p><strong>False Positives:</strong> The algorithm was twice as likely to incorrectly classify Black defendants <em>who would not reoffend</em> as &#8220;high risk.&#8221; This error occurred for 45% of these Black defendants, compared to only 23% of White defendants.</p></li><li><p><strong>False Negatives:</strong> The algorithm was more likely to incorrectly classify White defendants <em>who would go on to reoffend</em> as &#8220;low risk.&#8221; This error occurred for 48% of these White defendants, compared to 23% for Black defendants.</p></li></ul><p>This finding is crucial. The algorithm wasn&#8217;t just &#8220;biased against Black defendants&#8221; in a simple sense. It was also biased against White defendants, but by making the opposite kind of mistake. It disproportionately harmed one group by over-predicting risk and harmed another by under-predicting it. This powerfully illustrates the trade-off: a system can be biased against different groups in different, and sometimes conflicting, ways.</p><p><strong>The Mirror Problem: When an Algorithm Is Unfair Because It&#8217;s Too Accurate</strong></p><p>We often assume that algorithmic bias is a technical glitch&#8212;a result of &#8220;bad data&#8221; or a poorly designed model that needs fixing. While technical flaws like <strong>Measurement Bias</strong> can certainly exist (for example, using healthcare spending as a proxy for health needs, which systematically underrates the needs of Black patients), a deeper and more challenging problem is <strong>Societal Bias</strong>. This occurs when an AI model is technically accurate and makes correct predictions based on historical data, but its use still perpetuates and amplifies existing social inequalities.</p><p>A chilling example of this is the 2020 Ofqual algorithm in the UK, designed to assign A-level grades after exams were canceled due to the pandemic. The algorithm was built to match historical grade distributions, accurately reflecting that certain schools consistently produced lower grades. However, this &#8220;accuracy&#8221; meant it unfairly capped the marks of high-performing students at historically disadvantaged schools, effectively punishing them for their school&#8217;s past performance. The algorithm held up a perfect mirror to historical inequality, and in doing so, perpetuated it.</p><p>If the data used to train a model reflects a world of discrimination, an &#8220;accurate&#8221; algorithm will learn to reproduce those same patterns, imposing a relative disadvantage based on group membership.</p><p>&#8220;[D]iscrimination consists of acts, practices, or policies that impose a relative disadvantage on persons based on their membership in a salient social group.&#8221;</p><p>The implication is profound. In these cases, the primary challenge is not debugging the algorithm. It is confronting the fact that our algorithms are showing us an uncomfortable, but accurate, picture of a biased world. 
Fixing the problem isn&#8217;t about correcting the mirror; it&#8217;s about addressing the reality it reflects.</p><p><strong>The Real Choice: Preserve the Past or Transform the Future?</strong></p><p>When we decide to mitigate unfairness in an algorithm, we face a fundamental choice not between technical methods, but between two competing philosophies of justice: <strong>Formal Equality</strong> and <strong>Substantive Equality</strong>.</p><ul><li><p><strong>Bias Preserving</strong> measures are the algorithmic expression of <em>Formal Equality</em>. This philosophy, rooted in the idea of &#8220;treating like cases alike,&#8221; aims to ensure a process is applied consistently. An algorithm designed with this goal in mind seeks to match error rates across groups, essentially accepting the existing distribution of outcomes in society as a neutral starting point. It works to prevent the algorithm from introducing <em>new</em> bias, thereby preserving the societal status quo, inequalities and all.</p></li><li><p><strong>Bias Transforming</strong> measures are the expression of <em>Substantive Equality</em>. This philosophy argues that true fairness requires proactively accounting for historical disadvantage to &#8220;level the playing field.&#8221; An algorithm built on this principle actively works to counteract socially unacceptable disparities. For example, it might enforce Statistical Parity, ensuring different groups receive positive outcomes (like a loan approval) at the same rate. This approach is designed to transform, not just reflect, the existing social order.</p></li></ul><p>This is not a technical choice between two competing algorithms; it is a normative choice about the very purpose of our technology. Adopting a bias transforming approach requires developers and policymakers to make an explicit moral judgment about which societal biases are unacceptable and must be corrected. It forces a difficult but necessary conversation about what kind of society we want to build.</p><p><strong>What Are We Asking Our Algorithms to Be?</strong></p><p>The journey into algorithmic fairness quickly reveals that it is not a simple problem of debugging code. It is a complex, socio-technical challenge defined by unavoidable trade-offs, mirrored social injustices, and deep moral questions. There is no easy answer, no magic algorithm that can satisfy every definition of fairness for everyone, all at once.</p><p>This forces us to confront a more fundamental question. The central issue is not just whether our algorithms should reflect the world as it is or as it should be. It is a choice between competing visions of justice: Do we encode a formal equality that preserves the past, or a substantive equality that aims to build a different future?</p>]]></content:encoded></item></channel></rss>