You finish your draft.
Crisp grammar. Smooth tone. Polished flow.
It reads perfectly well.
Just not quite like you.
You open your playlist.
Familiar beats. Predictable moods. Safe choices.
It is what you tend to like.
Though not what you want right now.
We live in an age of algorithmic intimacy. Writing tools complete our sentences. Music platforms anticipate our moods. Résumé builders shape our professional narratives. The promise is personalization—systems that learn us, adapt to us, reflect us. And yet, the more they learn, the more they converge. The outputs are competent, relevant, even impressive. But they carry the scent of generality—smoothed edges, averaged tone, a voice that could belong to anyone.
This is not a failure of the bespoke ideal; it is the design working as intended. For AI systems are built to generalize. They extract patterns from vast datasets, distilling what is common, repeatable, and safe. They do not seek the singular. They seek the scalable.
But what happens to the individual when intelligence is trained on statistical distributions? What can we do when tools designed to reflect us begin to erase what makes us distinct?
Enlightenment Lenses on Modern Individuality
Here we turn to the thinkers of the Enlightenment—an era that first grappled, in earnest, with the tension between reason, experience, and the nature of the self. Their propositions were not idle speculations but attempts to define what it means to know, to act, and to be.
- Immanuel Kant placed his faith in reason. He believed that through rational reflection, one could uncover universal moral laws—principles that apply to all rational beings, regardless of circumstance. For him, the dignity of the individual did not lie in their uniqueness, but in their capacity to act according to these shared principles.
- David Hume, skeptical of abstractions, argued that human understanding is rooted not in pure reason but in experience, habit, and sentiment. He believed that knowledge was always contingent—shaped by context, emotion, and the particularities of life.
- Jean-Jacques Rousseau was concerned with the ways society molds and suppresses the individual. For him, the authentic self is often buried beneath layers of expectation and conformity. He believed that true freedom lies in reclaiming one’s voice from the pressures of collective norms.
These thinkers present a rich and conflicting lens: Kant’s universals, Hume’s particulars, Rousseau’s insistence on authenticity. They do not speak to our technologies, but they speak to our dilemmas. And together, they invite us to think—more clearly, more deeply—about what it means to be known, and to remain oneself.
Designing to Preserve the Individual Within the System
The tension between individuality and generalization plays out across the systems we use daily—from how we write and learn, to how we listen, watch, and present ourselves. These systems function. Efficiently. As intended. Yet intention alone does not ensure adequacy. Their design, once sufficient, is starting to show its limits. What, then, might the Enlightenment philosophers say?
From Fluency to Expressive Control
AI-powered writing tools—from Grammarly to Jasper to Notion—offer fluency, structure, and speed. They correct grammar, suggest tone, even generate full paragraphs. But studies show a creeping sameness. A 2024 Georgetown analysis found that essays written with AI assistance showed reduced semantic diversity and stylistic variation. A more recent study from Cornell University noted that AI-suggested writing tends to nudge users toward Western academic norms, even when their native styles differ. The result is writing that reads well but draws on a narrower range of vocabulary and expression.
This flattening persists despite the use of memory features. AI writing tools track preferences and learn surface-level patterns—formal vs. casual tone, common word choices, sentence length—but they do not truly understand individual voice. They optimize for consistency and fluency, not for stylistic depth or expressive range. Trained to minimize ambiguity and to align with dominant norms, these systems steer distinctive rhythms or cultural idioms toward what is deemed statistically “better.” Even as the tools supposedly “learn” us, they often teach us back a version of ourselves that is easier to process. Personalization becomes a mirror polished by generalization.
Kant, with his reverence for clarity and coherence, might appreciate the value of these tools as they help users express themselves in ways that are legible and rational—hallmarks of moral and intellectual dignity in his view. However, he would likely insist that such tools be used in service of autonomy, not conformity; to give users control over the principles that shape their expression.
Hume, on the other hand, might be uneasy. He held that knowledge is contingent and inseparable from experience. Writing stripped of emotional texture and lived nuance becomes form without feeling. He would likely advocate that designers preserve the experiential richness of language, not merely its structure.
Building on that concern, Rousseau would likely be the most insistent. He would caution against society’s norms smoothing and suppressing the individual, and counsel designers to protect the authentic voice in system design.
Such considerations are being reflected in recent designs. AI tools now allow users to set custom tone and voice parameters, giving more control over how their writing sounds. For instance, Sudowrite includes “emotion sliders” that let users shape the mood and texture of their output, while education platforms like Khanmigo are experimenting with adaptive response styles—adjusting tone, length, and complexity to help students preserve their voice as they receive guidance. These features are still early, still imperfect—but they mark a shift: from fluency alone to expressive control. From output to voice.
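As a loose illustration of what "expressive control" might mean under the hood, consider a user-owned voice profile that a writing assistant must consult before applying a suggestion. This is a hypothetical sketch, not any product's actual API; the field names (`preserve_idioms`, `max_edit_ratio`, and so on) are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch: a voice profile the user controls, which the
# assistant consults before rewriting anything. Field names are
# illustrative, not drawn from any real product.
@dataclass
class VoiceProfile:
    formality: float       # 0.0 = casual, 1.0 = formal
    preserve_idioms: bool  # keep regional or cultural expressions intact
    max_edit_ratio: float  # cap on how much of a sentence may be changed

def should_apply(edit_ratio: float, touches_idiom: bool,
                 profile: VoiceProfile) -> bool:
    """Apply a suggestion only if it respects the user's profile."""
    if touches_idiom and profile.preserve_idioms:
        return False  # the user's idioms are off-limits
    return edit_ratio <= profile.max_edit_ratio

profile = VoiceProfile(formality=0.3, preserve_idioms=True, max_edit_ratio=0.25)
print(should_apply(0.1, touches_idiom=False, profile=profile))  # True
print(should_apply(0.1, touches_idiom=True, profile=profile))   # False
```

The design point is less the code than the direction of authority: the profile constrains the system, rather than the system's statistics overriding the user.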
Allowing Both Prediction and Permission
Entertainment platforms are built on behavioral prediction. They learn user habits, compare them to millions of others, and serve up what others like us tend to enjoy. But the result is often a narrowing of taste. Various studies and industry analyses have shown that algorithmic recommendation systems often reinforce mainstream content and reduce exposure to diversity. On platforms like Spotify, Netflix, and YouTube, users are frequently looped into familiar content clusters even when they show interest in niche or experimental material, making discovery harder over time.
While Kant might see value in the system’s structure—an orderly mapping of preferences that respects rational choice—he would likely caution that true dignity lies in the freedom to choose, not in being chosen for. He might suggest that systems be designed to support autonomy by offering meaningful choice, not just efficient prediction.
Hume, attuned to the fluid nature of taste, would likely point out that preferences are shaped by mood, memory, and moment. A system that ignores emotional nuance cannot truly grasp what a person wants. He would encourage designers to build responsiveness to context, not just pattern.
Rousseau would ask the harder question: is this really one’s taste, or simply what fits one’s profile? He saw conformity as a threat to authenticity and would urge the creation of space for users to reclaim their preferences from the weight of collective norms.
In response to these tensions, some platforms have begun to experiment. Spotify’s “Daylist” adapts to time-of-day emotional shifts, offering playlists that reflect mood rather than habit. Its “Enhance” feature lets listeners opt into algorithmic suggestions, giving users more control over how recommendations appear. YouTube, too, has introduced ways for viewers to influence their feed—by clearing watch and search history or adjusting feedback settings to reshape what the system learns. Netflix once offered a “Surprise Me” button to break habitual viewing patterns, introducing randomized recommendations outside typical genres, but discontinued it in early 2023, reportedly due to low usage.
The mixed success reflects a deeper design challenge—one that is not just technical, but also conceptual: how to build systems that balance prediction with permission, familiarity with discovery, and efficiency with individuality. Systems that respond to users without reducing them, that guide without enclosing. Progress is uneven, but it reflects a growing awareness that personalization must evolve: not only to anticipate what users tend to choose, but to leave space for what they might choose next—and to let that choice be theirs.
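One simple way to leave space for "what they might choose next" is an exploration parameter: most recommendations follow the learned profile, but a user-controlled fraction is drawn from outside it. The toy sketch below illustrates that idea with an epsilon-greedy selection; it is a conceptual illustration, not any platform's actual algorithm:

```python
import random

def recommend(profile_items, discovery_items, epsilon=0.2, rng=None):
    """With probability `epsilon`, pick from outside the learned profile;
    otherwise, pick a familiar item. Here `epsilon` is imagined as a
    user-facing 'surprise me' dial, not a hidden tuning constant."""
    rng = rng or random.Random()
    if discovery_items and rng.random() < epsilon:
        return rng.choice(discovery_items)
    return rng.choice(profile_items)

rng = random.Random(42)
familiar = ["indie folk", "lo-fi beats"]
unfamiliar = ["free jazz", "Tuvan throat singing"]
picks = [recommend(familiar, unfamiliar, epsilon=0.3, rng=rng)
         for _ in range(1000)]
novel_share = sum(p in unfamiliar for p in picks) / len(picks)
print(round(novel_share, 2))  # close to epsilon, i.e. about 0.3
```

What makes this "permission" rather than mere prediction is who holds the dial: set epsilon to zero and the system stays safely familiar; turn it up and discovery becomes a deliberate choice.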
Balancing Optimization and Narrative
Résumé builders and productivity platforms prioritize efficiency. Tools like Rezi, Resume.io, and Canva’s templates are designed to optimize for applicant tracking systems. They emphasize keywords, formatting, and impact statements—elements that improve visibility but may flatten individuality.
Kant might begin with an appreciation for the fairness of standardization. Comparing candidates on shared criteria seems aligned with his emphasis on rational structure and equal treatment. However, he would likely insist that individuals still be treated as ends in themselves—not reduced to metrics. From this view, systems should allow space for self-definition, not just system optimization.
Hume would take the conversation further. For him, understanding is rooted in lived experience, sentiment, and context. A résumé, stripped of story, becomes a hollow record. He would likely encourage designers to preserve narrative texture—even if it means sacrificing a degree of efficiency.
Where Hume worries about what is lost, Rousseau would be concerned about what is imposed and its inherent risk of suppression. His counsel would likely be to build tools that help users express who they are—not just what they have done—in order to resist the pull of conformity and protect authenticity.
Some platforms are indeed beginning to move in this direction. For instance, LinkedIn's "About" section encourages storytelling alongside credentials. Canva offers "life story" résumé formats that foreground narrative over keywords. Creative platforms like Carbonmade allow users to present portfolios in expressive, non-standard ways. Even productivity apps like Notion and Obsidian are being used to build personal knowledge systems that reflect how individuals think—not just what they produce.
These are more than aesthetic choices. They illustrate a shift towards prioritizing narrative. From simply fitting the mold to defining one’s own frame.
Across these examples—from writing tools to entertainment platforms to résumé builders, and beyond—it is clear that system design does more than organize function. It shapes expression, choice, and selfhood. The original goals may have been pragmatic—fluency, prediction, optimization—and the systems fulfill their mandates. However, as needs and contexts evolve, and as the realization grows that the balance has tipped toward marginalizing the self, the imperative changes. We must reconsider not just how these technologies operate, but how we engage with them and the broader implications of their use. As the philosophers might remind us: AI systems should be built not merely to serve, but to respect and protect.
Because the goal goes beyond personalization to recognition—the recognition that one can be unique yet relatable, singular yet in communion, different yet not apart. Recognition that can only be attained when systems stop anticipating, “What would most people, or people like you, do?” and instead start asking, “Who are you, really?”
From the AI Conundrums and Curiosities: A Casual Philosophy Series by Jacquie T.