
Retaining Agency: Coherence and Stewardship in an Automated World

Overheard:

A jobseeker laments: “I used to feel like I had a chance when a real person read my CV. Now I get rejected before anyone even sees it. It’s all algorithms and keywords—and half the time, I’m not even sure what they’re looking for.”

A journalist mutters: “My headline was rewritten by the system. They said it would resonate better. But does it really still say what I meant?”

A teacher sighs: “I used to decide how to group my students. Now the platform does it before I’ve even met them. I’m not sure that’s how learning should begin.”



Control, in its traditional sense—manual, explicit, unambiguous—is no longer what it once was. The systems we inhabit move faster than we can consent, and operate deeper than we can comprehend. They do not wait for human input. They structure it. They neither ask for permission nor pause for deliberation. They anticipate.


Across industries, hiring algorithms trained on historical data have quietly learned to prioritize “fit.” In practice, this often means filtering out candidates who deviate from past norms—women, older applicants, those from non-traditional backgrounds. The shortlist arrives already filtered. Interviews proceed. Few stop to ask who never made it onto the list.


Elsewhere, in newsrooms, headlines are often rewritten by systems optimized for clicks. Lines thoughtfully crafted by journalists are reshaped by algorithms promising statistical resonance. The resulting headlines do attract attention, but what gets prioritized has shifted from editorial nuance to engagement.


These are not isolated occurrences. Increasingly, we find ourselves ceding control—and relevance—to AI. Yet if we look more closely, what emerges is not the loss of control but its transfiguration. The system behaved exactly as it was designed to, following the logic we embedded and the priorities we encoded.


The issue at hand, then, is not a crisis of control but a matter of authorship. And the question is not whether we still control these systems, nor whether we remain the sole architects of decision-making. Rather, it is how we remain relevant and coherent within the processes now operated by the very systems we have unleashed.



Ancient Anchors on Modern Constraints

Here, we turn to the Stoic philosophers. They did not speak of algorithms or automation, but they understood constraint, and what it meant to live within systems one could not wholly steer.


  • Epictetus taught that freedom begins with discernment: to know what lies within our control, and to relinquish what does not. His concern was not with external mastery but internal authorship. The world may bring loss, injustice, or confusion—events we cannot govern—but the sovereign domain of response remains ours alone, which he saw as the true wellspring of peace and freedom.
  • Seneca cautioned against ensnarement by outcomes beyond our reach. He urged clarity, restraint, and the discipline to act without distortion, cultivating a mind that anticipates adversity without surrendering to fear. For him, the good life was not secured by avoiding hardship but by meeting it with steadiness and reason, recognizing that true misfortune arises not from the event itself but from our response to it.
  • Marcus Aurelius insisted upon principled action even when expedience promised reward. He held that the purpose of life was to live in accordance with reason and virtue, and that happiness rests not on circumstance but upon the caliber of one’s character and the justice of one’s deeds.

The Stoic philosophers were not theorists of control; they were strategists of constraint. Their relevance for us lies not in offering consolation or moral instruction, but in illuminating how one might navigate constrained agency and remain relevant—especially when systems move faster than deliberation.



Designing AI-Mediated Decision-Making with Intention 

Epictetus: Design for Discernment

If Epictetus were designing AI systems today, he would not ask us to control every variable, nor would he urge us to resist automation out of a desire to reclaim total control. Instead, he would likely ask us to clarify where human judgment belongs—and to ensure we give serious attention to that space. To design systems that enable principled action while recognizing the limits of our control. Not perfect. Not heroic. Just possible.


In processes of evaluation, such as AI-mediated hiring, this translates into systems that integrate human judgment into the decision-making process in a way that is reasonable, reviewable, and structurally supported.


For instance, talent experience platforms such as Phenom and TalentRx have reportedly integrated explainable AI into their screening processes—highlighting which skills or keywords influenced automated decisions, and offering visibility into sourcing, scoring, and engagement. Recruiters see not just who was excluded, but why—enabling greater transparency and space for human judgment, without disrupting workflow or placing undue burden.
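
To make the idea concrete, here is a minimal sketch of what "showing the why" can look like in a screening pipeline. It is illustrative only: the skill list, weights, threshold, and data structures are assumptions for the example, not the workings of Phenom, TalentRx, or any real platform.

```python
# A minimal, hypothetical sketch of surfacing "the why" behind an automated
# screen. The skill list, weights, and threshold are invented for illustration;
# this is not the API or scoring model of any real platform.

from dataclasses import dataclass

SKILL_WEIGHTS = {"python": 3.0, "sql": 2.0, "stakeholder management": 1.5}
SHORTLIST_THRESHOLD = 4.0  # assumed cut-off for the automated shortlist

@dataclass
class ScreeningResult:
    candidate_id: str
    score: float
    contributions: dict    # skill -> points it contributed to the score
    missing_skills: list   # skills the model looked for but did not find
    shortlisted: bool

def screen(candidate_id: str, resume_text: str) -> ScreeningResult:
    """Score a resume while keeping the per-skill contributions visible."""
    text = resume_text.lower()
    contributions = {skill: weight for skill, weight in SKILL_WEIGHTS.items() if skill in text}
    score = sum(contributions.values())
    return ScreeningResult(
        candidate_id=candidate_id,
        score=score,
        contributions=contributions,
        missing_skills=[s for s in SKILL_WEIGHTS if s not in contributions],
        shortlisted=score >= SHORTLIST_THRESHOLD,
    )

# A recruiter reviewing the output sees not only the verdict but the reasons:
# which skills counted, and which absences cost the candidate the shortlist.
result = screen("cand-042", "Data analyst with SQL and stakeholder management experience.")
print(result.shortlisted, result.contributions, result.missing_skills)
```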


Seneca: Emotional Clarity Without Abdication

In the realm of judicial risk assessment, where AI tools influence decisions on bail, sentencing, and recidivism, emotional neutrality is often upheld as a virtue. Yet, when neutrality is unexamined, it can veer into ethical disengagement. The machinery hums; the human fades.


Seneca would caution against being emotionally distorted by outcomes beyond our control, but he would also remind us that unchecked detachment becomes abdication. Emotional clarity—the reasoned engagement with what can be controlled—not emotional absence, is the mark of discernment.


He might not ask judges to become philosophers, but he would urge institutions to build systems that preserve cognitive clarity—even when the machinery tempts them toward procedural autopilot. A systemic check-and-balance. Not as philosophical ornamentation, but as practical guard-railing.


In contemporary judicial reforms, we glimpse efforts that seem to echo these intentions. The Law Commission of Ontario’s ongoing project on AI in criminal justice is examining how courts might build greater visibility into algorithmic outputs, with a focus on empowering judges to document and justify deviations from automated recommendations. While the work remains emergent and under discussion, it gestures toward institutional memory, accountability, and emotional clarity—rather than blind reliance.


Thomson Reuters, in partnership with the National Center for State Courts, has published an ethical framework for AI in judicial contexts. It introduces structured risk classifications—low, moderate, high, and unacceptable—and mandates human oversight for all high-impact decisions. Though not yet universally adopted, the framework serves as a critical reference, helping courts embed safeguards that prevent moral drift and protect public trust.
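
As a thought experiment, the shape of such a framework can be sketched in a few lines of code: decisions are classified into the tiers named above, outputs in the unacceptable tier are blocked, high-impact decisions require human sign-off, and any departure from the tool's recommendation must be recorded with a rationale. The class names, field names, and routing rules below are hypothetical and do not reproduce the published framework.

```python
# A hypothetical sketch of tiered oversight, loosely modeled on the
# low / moderate / high / unacceptable classification described above.
# All field names, routing rules, and record structures here are assumptions.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class DecisionRecord:
    case_id: str
    tier: RiskTier
    tool_recommendation: str
    final_decision: Optional[str] = None
    reviewed_by_human: bool = False
    deviation_rationale: Optional[str] = None  # preserved as institutional memory

def route(record: DecisionRecord) -> DecisionRecord:
    """Apply the oversight rule: block unacceptable uses, require review for high-impact ones."""
    if record.tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Tool output may not be used for this decision.")
    if record.tier is RiskTier.HIGH:
        record.reviewed_by_human = True  # human sign-off is mandatory, not optional
    return record

def decide(record: DecisionRecord, final: str, rationale: str = "") -> DecisionRecord:
    """Record the human decision; deviations from the recommendation must be justified."""
    record.final_decision = final
    if final != record.tool_recommendation:
        if not rationale:
            raise ValueError("A deviation from the recommendation requires a written rationale.")
        record.deviation_rationale = rationale
    return record

# Usage: a judge reviews a high-tier case, departs from the tool's output,
# and leaves a traceable justification behind.
rec = route(DecisionRecord("case-17", RiskTier.HIGH, tool_recommendation="detain"))
rec = decide(rec, final="release on conditions",
             rationale="Stable housing and employment verified at hearing.")
print(rec.reviewed_by_human, rec.deviation_rationale)
```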


Marcus Aurelius: Legibility Between Principle and Process

And what about Marcus Aurelius, the emperor-philosopher? What would he advise?


He wrote extensively about accepting one’s role within a larger order, and about acting with integrity even when the system itself is imperfect. In today’s context, one imagines he would not ask institutions to be flawless, but would instead insist that their actions be legible—that even when automated, decisions remain interpretable against the principles they claim to uphold.


This emphasis on the ability to trace decisions back to values can be applied in the context of AI-driven loan approvals in financial services. Here, principled action means designing systems that consider more than raw risk thresholds. It means evaluating applicants through a lens that reflects institutional commitments to fairness and inclusion.


Parlay Finance, a company providing AI-driven tools for lenders, offers one such example. Applicants who are initially declined are reconsidered using alternative credit metrics: cash flow patterns, gig income, caregiving responsibilities. The system does not guarantee approval. It simply ensures that marginal cases are not dismissed without context.


Plaid, and similar fintech services, extend this idea. Applicants who were declined can link bank accounts, allowing a review based on transaction history, income stability, and non-traditional indicators. Millions of “unscorable” individuals gain access to credit, while risk controls remain intact.
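
A rough sketch of this "second look" pattern might read as follows. The features, thresholds, and rules are invented for illustration and do not describe Parlay's or Plaid's actual models; the point is only the shape of the logic: a primary decline is not final, but routes the applicant to a review based on alternative indicators.

```python
# A rough, hypothetical sketch of a "second look" after a primary decline.
# The features, cutoffs, and rules are invented for illustration and do not
# describe any lender's real underwriting model.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Applicant:
    applicant_id: str
    credit_score: Optional[int]   # None for "credit invisible" applicants
    monthly_inflow: float         # average deposits from linked accounts
    monthly_outflow: float
    months_of_income_history: int

def primary_pass(a: Applicant, score_cutoff: int = 660) -> bool:
    """Traditional screen: declines anyone unscorable or below the cutoff."""
    return a.credit_score is not None and a.credit_score >= score_cutoff

def second_look(a: Applicant) -> bool:
    """Alternative-data review, applied only after a primary decline."""
    if a.months_of_income_history < 6:
        return False                               # too little history to judge
    margin = a.monthly_inflow - a.monthly_outflow  # crude cash-flow buffer
    return margin > 0 and margin / max(a.monthly_inflow, 1.0) >= 0.15

def evaluate(a: Applicant) -> str:
    if primary_pass(a):
        return "approved"
    if second_look(a):
        return "approved on alternative data"      # the marginal case kept in view
    return "declined"

# A gig worker with no credit score but steady, positive cash flow is not
# dismissed without context.
gig_worker = Applicant("app-9", credit_score=None,
                       monthly_inflow=3800.0, monthly_outflow=3100.0,
                       months_of_income_history=14)
print(evaluate(gig_worker))  # -> approved on alternative data
```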


From recruitment to justice to finance—a throughline of intentional design connects these examples. Each system represents a deliberate effort to harness AI in augmenting human judgment and enhancing procedural transparency. Though imperfect and far from panaceas, they stand as concrete efforts to confront the question of authorship—across sectors, disciplines, and domains.



A New Kind of Control

The Stoic philosophers do not advocate resignation. Nor do they offer any illusion of retaining absolute control. What they offer is orientation: to design with intentionality and clarity—within system architecture, review protocols, and institutional culture—and to respond not with frantic reactivity, but with deliberate coherence.


Coherence is not a luxury. It is the architecture of agency. In an age when systems move faster than consent, and operate deeper than comprehension, coherence remains one of the few forms of authorship still within reach.



From the AI Conundrums and Curiosities: A Casual Philosophy Series by Jacquie T.