
When Systems Influence Outcomes: How Do We Act Responsibly, Collectively?

Our systems gather data faster than we gather wisdom.

They predict behavior before we understand context.

How, then, do we know whether we can trust their output?

And how can we be sure we have acted rightly?



This is not a rhetorical flourish. It is a challenge to our ethical infrastructure. The issue is not whether we can act, because clearly, we can. The real question is: do we know how to act well? And by what standard will that “well” be measured?


Artificial intelligence does not merely assist. Increasingly, it decides. It reaches into nearly every domain that shapes our lives: governance, law enforcement, healthcare, employment, education. It influences who receives a job interview, which students are offered extra support, what neighborhoods fall under closer scrutiny. Sometimes the stakes are high—predicting crime, scanning faces in a crowd. Other times they seem almost trivial—drafting a report, suggesting the next purchase.


Yet even in these smaller, everyday uses, something more fundamental is revealed. We have built systems that can act efficiently, creatively, and at times astonishingly, but the ethical scaffolding to guide their actions is still uneven and in development. And therein lies the disjunction between our technical ingenuity and our ethical preparation.


What does this mean for us, living alongside and within these systems? After all, we are not simply participants. In many ways, we are their caretakers, perhaps even their architects. The challenge is not capability, for that is present in abundance. Nor is it clarity in the abstract, for most of us hold some sense of what responsible use might mean. The challenge is finding a clarity that is shared—a clarity grounded in standards that are practical, ethical, and ultimately defensible.



Ancient Anchors on Modern Ethics

Here we turn to the classical philosophers. Centuries may separate us, but the questions they asked remain strikingly relevant in the age of AI. They do not offer easy answers; instead, they provide a disciplined way of thinking—a set of tools to cultivate judgment that is thoughtful, careful, and principled.


  • Socrates believed that virtue begins with inquiry—not idle questioning, but rigorous, uncomfortable interrogation aimed at moral clarity. For him, knowledge and virtue are closely linked; to know the good is to do the good. Reflection and dialogue are the pathways to understanding how to live rightly.
  • Plato sought the ideal Good. Not as an abstraction, but as a standard against which all action could be measured. He emphasized that justice, truth, and the pursuit of excellence must frame human conduct, and that mere utility or expedience is never sufficient.
  • Aristotle grounded ethics in the realities of human life. Virtue, for him, is cultivated through practice, habituation, and attention to context. Ethical action is judged not by theory alone, but by whether it fosters flourishing in individuals and communities.

The philosophical traditions of Socratic rigor, Platonic aspiration, and Aristotelian wisdom-in-context have laid the groundwork for generations of reflection on how to live well and act rightly in society, and continue to serve as time-tested lenses to examine the most pressing questions of our time.



When Systems Decide

If Socrates, Plato, and Aristotle were here with us today—not as marble busts in empty halls, but as living minds seated beside us in policy rooms and data labs—what would they say?


One would venture that they would not be hypnotized by the elegance of our models or the finesse of our forecasts. Rather, they might inquire: What assumptions underlie these systems? Whom do they privilege? What costs do they carry?


On Predictive Policing: When Data Predicts Danger

In Chicago, algorithms sifted through years of arrest records, addresses, and social ties to produce a “heat list” of people labeled as potential threats. These were not individuals caught in the act, but those whom the data suggested might be dangerous. The system did not ask whether those records were fair, nor whether labeling someone “high risk” would change how they were treated. Once flagged, people were more likely to be stopped or surveilled, and the prediction began to shape the outcome.


Across the sea, in Durham, England, a similar mechanism was at work. The Harm Assessment Risk Tool (“HART”) classified those in custody as high, medium, or low risk of reoffending. It relied on decades of policing data, in which some communities had been watched and arrested more often than others. The algorithm absorbed these patterns without question, and in doing so risked becoming a loop—repeating and reinforcing the very assumptions it was built upon.


Then there was Amsterdam’s “Top 600” list. It named those thought most likely to commit violent crime, many of them young men of Moroccan descent. To appear on the list carried consequences: closer monitoring, heavier scrutiny, and the stigma of being treated as a risk before a crime was ever committed.


Chicago, Durham, Amsterdam—three cities, three algorithms, one pattern. Data from the past was taken as neutral ground for the future. The algorithms did not ask where those patterns came from, or whether they told the whole story. They simply carried them forward. In the process, prediction blurred into prescription, and yesterday’s records began to write tomorrow’s fate.
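To see the mechanics behind that metaphor, consider a minimal sketch in Python. The neighborhoods, numbers, and scoring rule below are entirely invented for illustration; they describe none of the Chicago, Durham, or Amsterdam systems. The point is only the shape of the loop: where patrols have gone before determines what the records say, and the records determine where patrols go next.

```python
import random

random.seed(0)

# Two hypothetical neighborhoods with the SAME underlying offense rate.
# Historically, neighborhood "A" was patrolled twice as heavily, so its
# residents show up in the arrest records far more often.
TRUE_OFFENSE_RATE = 0.05
patrol_intensity = {"A": 2.0, "B": 1.0}      # assumed historical patrol levels
arrest_history = {"A": 0, "B": 0}

def arrests_this_year(intensity, population=10_000):
    """Recorded arrests: offenses that occur AND happen to be observed."""
    recorded = 0
    for _ in range(population):
        offended = random.random() < TRUE_OFFENSE_RATE
        observed = random.random() < min(1.0, 0.1 * intensity)
        if offended and observed:
            recorded += 1
    return recorded

for year in range(5):
    for hood in arrest_history:
        arrest_history[hood] += arrests_this_year(patrol_intensity[hood])
    # The "predictive" step: allocate next year's patrols in proportion
    # to past arrests, exactly as the records suggest.
    total = sum(arrest_history.values())
    for hood in patrol_intensity:
        patrol_intensity[hood] = 3.0 * arrest_history[hood] / total
    print(year, dict(arrest_history),
          {k: round(v, 2) for k, v in patrol_intensity.items()})

# Both neighborhoods offend at the same rate, yet the records never say so:
# yesterday's patrol decisions keep writing tomorrow's "risk."
```

Nothing in the loop ever asks whether the original disparity in patrols was justified; it simply carries it forward, year after year.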


On Facial Recognition: When Identity is Presumed Absolute

Elsewhere, facial recognition software generates its own cascade of consequences.


In Detroit, Michigan, Robert Williams was wrongfully arrested after a facial recognition system misidentified him as a suspect in a store theft. Though he had an alibi, the technology proceeded without questioning its own accuracy. The burden fell upon the individual. The system bore none.


In Hong Kong, during the pro-democracy protests of 2019, facial recognition cameras were among the tools of surveillance present in the city. Protestors adapted in response, turning to masks, umbrellas, and even laser pointers to preserve anonymity. In this instance, the technology did not err in its identification; rather, its very capacity to identify influenced participation.


Two different examples, one unifying theme. Whether mistaken or precise, facial recognition systems, when left unquestioned, press their impact upon people. In a world where ethics still matter, accuracy alone is not sufficient. Justice, consent, and accountability must remain integral to the equation.



Toward Higher Standards

These are not edge cases. They are clear illustrations that our systems are capable, yet neither wise nor ethical.


Technologies do not ask questions. They execute. So we must be the ones to ask—and seek answers—collectively. Not because we fear progress, but because we care—for people, for communities, for the consequences of our creations.


What, then, might we do to instill better judgment and responsibility?


It might begin with auditing datasets for bias—not as a checkbox, but as a moral imperative. It might mean involving local communities in policy decisions, ensuring that those most affected have a voice. In 2020, for example, Toronto’s police service paused its use of facial recognition and initiated public consultations. While not a perfect solution, this move signaled a critical first step toward public accountability. Still, the ongoing debate over the service’s decision to upgrade its facial recognition technology shows that the challenge of finding a shared ethical standard is far from over.
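What might “auditing a dataset for bias” look like in practice? One modest version is sketched below in Python; the column name, group labels, and tolerance threshold are all hypothetical, chosen only to show the shape of the check: compare each group’s share of the records against its share of the population, before any model is ever trained on those records.

```python
from collections import Counter

def audit_representation(records, population_shares, tolerance=0.10):
    """Flag groups whose share of the records strays far from their
    share of the population.

    records: list of dicts with a "group" field (hypothetical schema)
    population_shares: dict mapping group -> fraction of the population
    tolerance: how large a gap is tolerated before flagging
    """
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        record_share = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "record_share": round(record_share, 3),
            "population_share": pop_share,
            "flagged": abs(record_share - pop_share) > tolerance,
        }
    return report

# Toy illustration with made-up numbers, not real policing data.
records = [{"group": "north"}] * 700 + [{"group": "south"}] * 300
print(audit_representation(records, {"north": 0.5, "south": 0.5}))
# Both groups are flagged: "north" is over-represented in the records and
# "south" under-represented, relative to equal population shares.
```

A check like this settles nothing by itself; it only surfaces the question that the community consultation then has to answer: why does the gap exist, and should the data be used at all?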


It might also demand rigorous testing and transparent reporting of error rates. High-stakes decisions should include human oversight. In Michigan, the wrongful arrest of Robert Williams led to a historic settlement and mandatory police training. This case became a catalyst for a national debate on the use of facial recognition by law enforcement. The system erred. The city responded. The nation weighed in. That, too, signals movement toward a higher standard of responsibility.
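Rigorous testing here means more than a single headline accuracy number. It means reporting how often the system is wrong for each group it touches, and refusing to let a high-stakes decision rest on the score alone. The sketch below, again in Python with an invented log format, shows the shape of such a report and of a human-in-the-loop gate.

```python
from collections import defaultdict

def false_match_rates(results):
    """Disaggregate false-match rates by group.

    results: iterable of (group, predicted_match, truly_same_person) tuples,
             a hypothetical log format for a face-matching system.
    """
    stats = defaultdict(lambda: {"false": 0, "non_matches": 0})
    for group, predicted, actual in results:
        if not actual:                       # ground truth: different people
            stats[group]["non_matches"] += 1
            if predicted:                    # system still declared a match
                stats[group]["false"] += 1
    return {g: round(s["false"] / s["non_matches"], 3)
            for g, s in stats.items() if s["non_matches"]}

def needs_human_review(score, high_stakes=True, threshold=0.99):
    """An arrest-level decision never rests on a similarity score alone."""
    return high_stakes or score < threshold

# Made-up log entries for illustration only.
log = ([("group_1", True, False)] * 3 + [("group_1", False, False)] * 97
       + [("group_2", True, False)] * 11 + [("group_2", False, False)] * 89)
print(false_match_rates(log))     # {'group_1': 0.03, 'group_2': 0.11}
print(needs_human_review(0.997))  # True: high-stakes cases always get a person
```

Publishing numbers like these makes any disparity between groups visible before an arrest, not after it.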


Perhaps, as revealed most starkly in the case where facial recognition shapes human behavior, it would require that we ask not only what a system can do or how accurate it is, but what it ought to do. That we measure success not in efficiency, but in equity. That we treat citizens not as data points, but as participants in a shared moral project.


For this responsibility belongs not only to courts and law enforcement agencies, but to classrooms, hospitals, workplaces, marketplaces—to every domain where algorithmic determinations displace human judgment, where human life has begun to bend beneath the weight of artificial decisions.



Answering the Ethical Call

If Socrates had a dashboard, he would not be impressed by its predictive power. He would ask what questions it fails to raise, what assumptions are embedded in its data, and whether its conclusions lead toward understanding the good.


If Plato were handed a dataset, he would not marvel at its size. He would ask whether it reflects the good, whether it serves justice or merely expediency.


And if Aristotle were shown a model, he would not inquire first about its accuracy. He would ask what it means for the polis, whether it promotes trust, equity, and the flourishing of communities.


Today, the systems act. They do not discern. We are the ones who must notice, decide, and steer. How we do so—collectively, ethically, defensibly—remains the hardest question. Perhaps, with Socrates, Plato, and Aristotle alongside us, guiding us on this urgent, necessary quest, we might just find the right way forward.



From the AI Conundrums and Curiosities: A Casual Philosophy Series by Jacquie T.