“She got my heart on my sleeve, I’m not even mad.”
The voice? Drake.
The vibe? The Weeknd.
The artists? Nowhere near it.
The label? Furious.
The audience? Captivated.
The song? AI-generated.
The takedown? Swift.
A hit song, composed in seconds. Millions streamed it before learning it was synthetic. Elsewhere, a painting circulates online. The brushstrokes evoke Van Gogh—bold, swirling, expressive. The image was generated by AI, trained to mimic his style using public datasets and prompt-based rendering. Viewers admire the palette, the mood, the texture. Most do not realize it was never painted. Fewer still ask who authored the grief.
We are not witnessing the future. We are witnessing a shift in judgment. When AI mimics the masters, we often cannot tell. And when we do, we marvel at how well it performs. Yet when humans use AI to assist their own work, reputational caution kicks in. The same tools. Similar techniques. But different consequences.
When Assistance Is Masked
In many creative and professional realms, artificial assistance is often concealed. Not because the work falters, but because admitting the help can be reputationally perilous.
Some writers have reportedly inserted typos, awkward phrasing, or casual slang into documents to obscure the use of AI. Even résumés and cover letters, once monuments to precision and polish, are being deliberately blemished in some instances. The irony is sharp: the very documents that once demanded flawlessness are now being intentionally degraded to appear more “authentic.” A clean page invites suspicion. Imperfection has become a reputational shield.
Designers proceed with similar caution. Generative platforms assist with light, texture, and composition, yet their role is often minimized, even denied. Attribution is avoided. The hesitation does not arise from the artifact, but from its genealogy. Human collaboration—directors, retouchers, illustrators—has always been accepted. But when the partner is synthetic, the calculus shifts, and silence becomes strategy.
Other creative domains and academia reveal the same tension. A professor may lean on an editor, a scholar on a research assistant, an author, on occasion, on a ghostwriter. These forms of help have long been regarded as permissible. Yet when the assistant is algorithmic, disclosure grows fraught. In 2023, a peer-reviewed journal retracted a submission not for falsehoods but for origins. The citations were accurate. The analysis was sound. But the provenance, even only as a supporting tool, was mechanical, and that was reason enough.
The dilemma is not quality, but authorship. More precisely: the unease that arises when collaboration is no longer wholly human. As AI becomes a routine collaborator, the question is not whether we use it—but how.
Why is AI-assisted work in ordinary, everyday settings often dismissed as lesser, even when the human remains the architect of intent? When does assistance reflect thoughtful augmentation, and when does it slip into lazy reliance? And why, in contrast, do we marvel when AI mimics the greats—producing art in the style of Van Gogh, music like Mozart's, or pop hits in seconds—without the same stigma?
Ancient Anchors on Modern Creations
To untangle these questions, we turn to the Epicurean philosophers. The Epicurean school is seldom invoked in debates over authorship. It is more readily associated with pleasure, simplicity, and the pursuit of tranquility. Yet beneath that reputation lies a rigorous devotion to clarity, intentionality, and the ethics of utility—qualities that resonate uncannily with present disputes over AI-assisted creativity.
- Epicurus taught that true pleasure rests in the absence of pain and disturbance (ataraxia). He sought clarity over ornament, sufficiency over excess, and the alignment of means with purpose. For him, value did not reside in extravagance, but in the calm utility of what serves without confusion.
- Lucretius, in his De Rerum Natura, held that nature is to be grasped through reason rather than reverence. He sought to unveil the workings of the world, stripping away illusion and awe so that truth might be seen plainly. In his view, appearance is not reality; resemblance cannot substitute for underlying nature.
- Philodemus, writing on poetry and art, argued that style reflects the author's sensibility and experience. He held that form alone is not enough; true expression arises from skillful arrangement and coherence. For him, imitation that lacks such grounding is hollow—it may have shape, but it lacks force and fails to give pleasure.
Presence, Not Spectacle
Epicurus: Augmentation Is Not Abdication
Epicurus did not worship toil. He did not conflate effort with virtue, nor did he elevate struggle as a noble path. For him, true pleasure lay in tranquility—a life untroubled, a mind unconfused. Tools, in his view, were welcome only if they served discernment. If they muddled the waters, they were distractions.
Were he among us today, Epicurus would likely not reject artificial assistance. He would, however, inquire: Does this tool promote ataraxia, helping us maintain mental tranquility, or does it obscure authorship and intent?
Much of our discomfort with AI stems not from its capabilities, but from its ambiguity. We label some works “AI-generated” and treat them with suspicion. Others, “AI-assisted,” pass quietly—even when the machine’s hand is heavy. Epicurus would likely find this binary imprecise. He would call for a taxonomy that reflects purpose, not just presence.
One such effort is the GAIDeT framework—Generative AI Delegation Taxonomy—a proposed system for disclosing how AI was used in academic and creative work. It breaks assistance into categories: idea generation, drafting, editing, analysis. The goal is not to shame the use of AI, but to clarify its role. Yet adoption remains uneven. Some journals and platforms have embraced it; others have not, whether from editorial inertia, a lack of standardization, or discomfort with too much transparency. Whatever the cause, the result is fragmentation.
Even in publishing, where AI is routinely used for grammar, structure, and tone, the deeper questions often go unasked. Did the tool support the author’s intent—or smooth it into something more generic? Guidelines from publishers like Wiley, Oxford University Press, and Elsevier now encourage responsible use, placing final accountability with the human creator. But the evaluative lens is still nascent.
Epicurus would urge us to examine not just whether AI was used, but how—and to what end. He would ask whether the tool preserved agency or eroded it, whether it illuminated the creator’s intent or replaced it with something refined but less authentic. Augmentation, when wielded with care, is not abdication. It is still a form of craftsmanship.
He would, however, also recognize that while disclosure efforts are well-intentioned and reflect bona fide attempts at progress, they remain fraught with complexity. For when disclosure becomes too inconsistent or burdensome, it no longer serves clarity. It becomes noise, and the process breaks down—not just ethically, but practically. Epicurus might thus suggest that rather than aim for exhaustive accounting, we seek to design a disclosure system grounded in measured candor and thoughtful transparency—one that informs rather than inundates.
Lucretius: Originality Is Arrangement, Not Alchemy
Lucretius believed that nothing comes from nothing. All creation, he taught, is recombination—atoms in motion, colliding, swerving, and joining according to natural law. Originality is not divine invention, but careful arrangement. The poet does not invent language anew; he selects, orders, and clarifies from what already exists.
This lens sharpens our view of AI collaboration. When the human remains the architect—curating prompts, refining outputs—the machine becomes a tool of arrangement, not a replacement for originality.
Consider Refik Anadol, a media artist known for transforming data into sensory experience. In his exhibition Unsupervised at MoMA, he trained AI models on the museum's collection, reinterpreting over two centuries of its art as a dynamic, ever-changing visual experience. The result was immersive digital art—machine-assisted but distinctly shaped by his curatorial vision. The effort is shared; the arrangement, his.
Or take Sougwen Chung, a Canadian artist and researcher whose work explores human-machine collaboration. In her performance Drawing Operations (Duet), she paints alongside a robotic arm named D.O.U.G., trained on her own gestures. The robot responds in real time, creating a duet between memory and motion, between artist and algorithm. The art is collaborative; the authorship unmistakable.
We celebrate these uses of AI for their brilliance and creative ingenuity. We also tend to marvel when machines replicate the styles of the great masters. AI-generated portraits in Van Gogh's style echo texture but not temperament; they mimic brushstroke but not biography. Still, we praise the precision. We applaud the spectacle.
Yet when the same tools are used by everyday creators—students, freelancers, hobbyists—the reception often shifts. An article refined by AI, a portfolio polished by prompts, a report shaped by AI-driven insights—these would likely not be viewed with the same grace and amazement. Instead, the work may be considered lesser, or even somehow dishonest. The standards we apply are uneven, and the reasoning behind them is guided more by perception than by principle.
Lucretius might encourage us to take a different, more consistent stance. He would ask: is this a new joining, a new combination? Creation, he would remind us, is not a matter of conjuring the new, but of arranging the known. The materials may be shared, the forms familiar, but if the creator remains present, and the machine is used to extend rather than replace, he would posit the result is no less real.
Philodemus: Imitation Is Not Theft When It Honors the Source
Philodemus took it a step further. He did not immediately treat imitation as betrayal, but as a question of fidelity: to clarity, coherence, and the pleasures the work provides. A work may echo another; a phrase may borrow its rhythm. However, if the creator’s guiding sensibility remains present, and the imitation is executed with discernment, the act is not derivative. It is deliberate.
Consider Holly Herndon, an experimental musician who trained an AI named “Spawn” on her own voice. The AI became part of a collective, extending human range rather than replacing it. The result was not a facsimile, but a new, intentional sound—an extension that honored its source and delighted its listeners.
Or David Cope, who developed an AI composer named Emily Howell. Howell built upon the logic of imitation, recombining musical patterns into new possibilities. Cope treated the machine as a partner, using its outputs to explore ideas he might never have imagined alone. The process was transparent. The authorship, clear. The imitation, purposeful.
In both cases, the expression is coherent, the source is honored, and the output gives pleasure. Contrast this with the ongoing challenges faced by platforms like Spotify. In 2023, it purged tens of thousands of AI-generated tracks designed to mimic human music and exploit streaming algorithms. But the problem persists. AI-generated albums continue to appear under real artists' names—sometimes even deceased ones—without consent. Attribution remains elusive; oversight, increasingly untenable.
The same tension plays out in visual art. Lawsuits against generative image platforms have intensified. Getty Images’ case against Stability AI remains active. A major class-action suit in the U.S. is heading to trial. And Disney and Universal have filed sweeping claims against Midjourney for enabling mass-scale generation of copyrighted characters. What begins as imitation becomes evasion—replicating style without honoring origin, scaling mimicry without regard for coherence or consent. Even the training itself is built on scraped datasets of human creativity used without credit, compensation, or transparency.
Philodemus would likely insist that true aesthetic judgment attends as much to the creation process as to the finished work. Not just what is made, but how it is arranged and realized. He would not reject the tool, but would ask whether the imitation preserves coherence and skill, serves pleasure, acknowledges lineage, and respects the source material. These are standards that seem simple, even self-evident—yet they are arduous to uphold in systems built for speed, scale, and profit rather than careful discernment. And they matter all the more, not for their simplicity, but because they are often so easily overlooked.
As AI threads itself into classrooms, studios, and workplaces, the standards by which we judge assisted creativity will shape not only reputations, but futures—and the stakes are far from trivial. They are rooted in livelihoods, identities, and years of effort, skill, and investment. Creative work has always been more art than science—not only in its making, but also in its judgment, governance, and endurance. Patchy. Elusive. Messier than we like. Harder than we wish to admit.
Yet we must persist. To preserve that which is beautiful, deliberate, and worthwhile. Starting, perhaps, with a more consistent yardstick: to applaud the work even when it is ordinary, to trust it even when it does not dazzle. To judge by authorship, not spectacle.
From the AI Conundrums and Curiosities: A Casual Philosophy Series by Jacquie T.