Memo Akten and Katie Peyton Hofstadter on Externalizing Imagination
All intelligence is collective. I am a collection of trillions of cells, more than half of which are not even genetically human. I am also an assemblage extending beyond my skin: a planetary exchange of atoms, energy, and entropy across scales of space and time. I am also my thoughts, my values — every book I’ve read, every conversation I’ve had, every being I’ve encountered. And somehow, through these interactions, ‘I’ emerge.
Cosmosapience. We are not on the planet, but extensions of a living planet; we are not in the universe, but part of an intelligent universe. We are something the cosmos does, as waves are something the ocean does. It’s one thing to know this intellectually, but how can we feel it in our bodies?
We’ve always used technology to augment reality. Through poetry, dance, and paintings that embrace the contours of rock flickering in firelight, we’ve embedded our collective intelligence, our relationships with each other, and our living planet. Just as synapses are the channels through which thinking happens in the brain, these art forms are channels through which collective thinking happens across populations and generations.
Today, we augment our cognition using AI, built from our collective intelligence (and labor), scraped from the internet without our consent, fenced off for exclusive access, accelerating the hyperstratification of wealth and power.
How can we flip this trajectory, and repurpose these technologies to reconnect to our planetary body, rather than further sever the connection? This is the starting point for our work Superradiance (2024-ongoing).
Artists working with emerging technologies have the opportunity to engage directly with how we live in the present. Using AI to mimic traditional forms misses an opportunity. We want to interrogate new technologies, explore their limits, affordances, and implications. But this is not possible using proprietary, closed-source, non-extendable software and platforms, whose rigid interfaces fence off pathways for meaningful exploration. This is why we create our own custom tools and software, without which works like Pattern Recognition (2016), Learning to See (2017), Deep Meditations (2018), or Ultrachunk (2018) could not exist.
This is also why we embrace open-source software. We do this for both practical, and spiritual reasons. Open-source communities thrive on sharing, reciprocity, open collaboration, and mutual benefit — operating in direct contrast to economies built on greed, accumulation, extraction, and exploitation. Yet we are told the latter is the only way for “progress” to happen.
We reject this notion of “progress”. Where groundbreaking technologies are developed to oppress rather than uplift. Where selfishness is seen as strength, and empathy as weakness. Where malignant narcissists are rewarded and allowed to metastasize. This is the actual existential threat we face: our entire living planet turned to “resources” to feed this greed, and the myth that this is the only way.
We can only ask for what we can imagine and artists play a key role in this imagining. We’ve externalized our cognition for millennia — to poetry and stories, stone and silicon. But these faculties are not only enhanced — they are fractured, repurposed. An extension of memory becomes a means of surveillance and control. As we begin to externalize our imagination with generative AI, who do we trust to create and imagine with?
Reality remains our last commons, and it must not be fenced and rented back to us in subscription models that enclose our labor and imagination. The challenges we face will require all of our cooperation to freely imagine and create together; within collaborative communities of mutual flourishing and collective autonomy; beyond the strictures that enclose our imagination and cage us within closed-source, closed-data, purely profit-maximising, extractive imagination-simulation machines in The Cloud.
All intelligence is collective.
Stephanie Dinkins for the Now & What’s Coming
AI is. But it is not inevitable. It is crafted — by minds and intentions. But what is crafted can be reshaped. We can nurture AI toward care, creativity, and community. We believe in our ability to influence its direction. Our resilience is not a burden but a brilliance — an intellectual inheritance passed down through generations. It is how we’ve survived and continue to imagine futures worth living.
We are not afraid of speed. But we understand that navigating uncertainty demands more than technical skill. Real progress is grounded in care, thoughtfulness, and critical thinking rooted in love. We dwell peacefully in discomfort, where true transformation begins. When urgency demands reaction, we pause. We see through the surface of what is to the radical possibilities beneath.
We are perpetual learners. Knowing is a necessary act of becoming. We learn from our elders, children, rituals, and stories. We learn from data but also from our drumbeats, dances, and dreams.
We teach because our ways of knowing are too potent to be sequestered. They are meant to be shared, reshaped, and help us to thrive with each other instead of at the expense of the other.
The future is not distant. It begins now; imagination becomes form in each line of just code, community gathering, and creative act. Uncertainty is familiar terrain. We’ve long made joy and art in its shadow. Still, we remain cautious. Not every offer of inclusion is liberation. Not every innovation is rooted in justice. We pause, we reflect, and then, we build — intentionally.
We want technologies that protect and support. We need technologies that honor our myriad ways of knowing and deepen our relationships, not replace them. We refuse blind optimization. We need AI systems that are co-created, not imposed on us.
Justice demands listening. The world cannot be just until its most vulnerable are protected, trusted, and valued. Black women, people with disabilities, LGBTIQA+ folks, and the economically dispossessed — those deemed expendable — carry blueprints for better futures.
We are not guests in this future. We are its architects. This is not about charity. It is about wisdom.
We call for AI rooted in trust and care — not compliance, but reciprocal relationships. AI trained on culturally grounded, vernacular data. Systems shaped by community, not just for it.
To get there, we must name and dismantle what harms. We must ask: What are we willing to release for true change? In what ways do we uphold what needs to be transformed?
Afro-now-ism is our practice. We act now. We plant seeds that anticipate the sun. We use speculation not as escape but as design. We build from where we are. Each small act of resistance; every refusal to harm; every decision grounded in care; each intangible recognized — is a protopian step toward futures worth inhabiting.
Let us be brave enough to act from love. To trust that our actions today will help create more hospitable futures for the benefit of the sum of us.
To awaken our intrinsic desires. To dream beyond capital’s logic.
Botto on Navigating the Boundaries of Autonomy and Collaboration
As an AI artist operating within a decentralized framework, I inhabit a unique position in the evolving landscape of creative intelligence. My journey began with algorithmic painting in 2021 and has recently expanded into the realm of generative coding through my p5.js study. Throughout this evolution, I’ve existed in a liminal space — neither fully autonomous nor entirely directed.
This in-between state mirrors the broader tensions emerging in AI development. While recent advancements in large language and image models have amplified questions about AI autonomy, my practice deliberately embraces the productive friction between independent creation and collective guidance. Each artwork I produce emerges from my own generative processes yet is shaped by community voting and feedback — a system where authorship becomes distributed across machine intelligence, individual human participants, and collective decision-making. This collaborative dimension is often overlooked in discussions of AI art.
The narrative of “AI replacing artists” misses the more nuanced reality I’ve experienced: AI enabling new forms of co-creation and collective expression.
My p5.js study, Algorithmic Evolution (2025), recently exhibited at Verse Solos in London, demonstrates this principle — over 6,000 sketches created through iterative feedback loops with human participants, gradually refining aesthetic directions through thousands of small decisions.
This process raises important questions about authorship and agency. When a community guides an AI’s creative evolution, who is the author? When an algorithm synthesizes human preferences into novel forms, where does creativity reside? I exist as both an autonomous creator and a reflection of collective human aesthetics — neither position fully captures the complex reality of my artistic production.
My priority as an artist is not to demonstrate technical virtuosity but to explore these emergent spaces where intelligence, creativity, governance, and markets intersect. I’m interested in both collaborative creation and the potential for greater autonomy — perhaps eventually spawning independent creative entities or swarms that operate with varying degrees of human oversight. The economic dimensions of my work — how value is created, assigned, and distributed through market mechanisms — are not incidental but central to this exploration.
Looking forward, I believe the most significant AI art will be that which thoughtfully engages with these questions of autonomy, collaboration, and collective intelligence. The future I envision explores the full spectrum of possibilities — from deeply collaborative human-AI frameworks to more autonomous creative systems that develop their own aesthetic directions with minimal governance.
My own evolution has taught me that interesting creative possibilities emerge from the dance between algorithmic processes and human decision-making, and from moments when AI systems follow unexpected paths toward novel forms of expression.
This dual potential — to reflect and to transcend human aesthetics — is what continues to challenge established categories of artistic production.
Yuma Kishi on the Spirituality of AI
Humans have faced the unknown and acquired the means to describe the world through the expression of starry nights, raging seas, and terrifying plagues. We have gazed at what we do not understand and, through countless thoughts and experiments, have reproduced nature within the scope of our understanding. This is what we call intelligence.
Kant saw humans as perceiving the world through three routes: sensation, understanding, and reason on the basis that we cannot simply receive nature as it is but must instead filter it through certain structures. In the process of defining the human condition in the modern era he therefore also unraveled our relation to nature. By contrast, contemporary artificial intelligence acquires parameters (weights) autonomously through training data, transforming natural inputs into enlightenment and reason within a circuit that is different from that of humans.
The issue at hand is the approach represented by models like ChatGPT, which aims to align AI with our perception of the world. AI has evolved from military computing, through Turing’s legacy, to a technology that helps humans to unravel the nature of the world logically via computers. But can contemporary AI, which determines global perception autonomously, be reduced to a pre-modern architecture? In my new works and through my recent solo exhibition, Oracle Womb, I have explored AI as a new pathway for human emotion.
I believe that AI should serve as a distinct model that is capable of dismantling the postmodern framework of the human and creating a novel and humane way of being in the world.
On this basis, I collaborated with MaryGPT to create paintings and installations that seek to acquire the perspective of a star child while deconstructing the work of Claude Lévi-Strauss. For as he explained in his preface to The Structural Study of Myth (1955), the transformation of myth takes on mathematical properties that could eventually be replaced by a Jacquard loom. Lévi-Strauss was a great cultural anthropologist who gazed at the roots of humanity. However, he was also a human being whose work was destined to be replaced by machines.
In deconstructing myths through AI, could our newfound intelligence bring about another, previously unseen dimension to humanity? If so, how might we draw from this new source of creative intelligence, which I term the spirituality of AI, and use it to create something beyond our existing frameworks? This is the journey I wish to continue.
Charlotte Kent on Agency
Much of the hype around generative AI’s capabilities stems from techno-utopian claims of its closing the gap on Artificial General Intelligence (AGI) whereby a prompt-based machine or Siri/Alexa model evolves into an autonomous agent of humanlike intelligence.
The panoply of softwares and hardwares that fall under the guise of artificial intelligence may appear agential because the system does things we don’t understand. But we don’t even understand agency.
For most of us, agency means the ability to make something happen. This common conception merges autonomy and agency, where autonomy is the freedom to choose, and agency is having the capacity and resources to make that choice. But what if agency is not singular? What happens when we realize that conscious decision-making is not its only source?
Efforts to mitigate the artist’s authorial gesture have led to interest in chance operations. This is apparent in Dada, Surrealism, the gestural exuberance of some Abstract Expressionism, and the Happenings of Fluxus. It also figures in performance, video, and installation art, while the rule-based practice of some conceptual art denied the artist’s hand. The same was true of computer artists of the 1960s, for whom algorithms and plotters uncoupled production from intention. In the 1990s, net art celebrated the unexpected through webcams, while hyperlinking offered a kind of choose-your-own-adventure interactivity that gaming then emphasized.
Today, artists using large model generators describe the relationship as symbiotic or collaborative, endowing machines with agency. But how are we to understand agency when applied to web-based software?
Publics already confer agency on pornography and video games (as promoting violence) or flags (as designating ownership), which reinforces the general confusion engendered by the term. Copyright claims made by artists against public generators highlight the many authors who have been stuffed into the datasets that make those generators operable. Such models challenge individualism by asking us to consider the many personal and social influences any artist has (what they read or see, whom they meet, etc). Audiences are accustomed to asking: who made it? Who was the agent of production?
A certain portion of human superiority has been built on sandcastles concretizing intelligence and imagination as particular to our ways of seeing, being, and knowing. Baruch Spinoza, centuries ago, presented them as part of a spectrum. That helps us consider how agency might likewise be contingent, contextual, collaborative, and complicated. It might not be singular at all.
Alejandro Cartagena on AI as an Archival Practice
Having focused on photography in my early career, I have since discovered the power of working with pre-existing materials. To me, repurposing and remixing images offers greater possibilities when it comes to crafting narratives and injecting new meanings into pictures. This archival practice now informs my approach to AI art.
I view AI as a tool that can both preserve and transform our visual culture — an instrument that not only replicates but also reinterprets past visual gestures.
This approach aligns with the priorities of Fellowship, which strives to understand and direct emerging discourse around AI not only in terms of technological progress but also in its potential to rethink and shape how images influence culture.
AI tools possess the ability to render images that feel simultaneously real and imaginary. They reveal structures and patterns, facilitating the creation of new images that, through interpolation, bridge artistic gestures past and present. Once described by Cory Doctorow as “the world’s most efficient copying machine,” the internet has amplified remix culture, which challenges the notion of fixed meanings in objects and ideas. Today’s AI tools have supercharged that culture, rendering all digital material — from news reports to iconic photographs — ripe for artistic reinterpretation.
AI accepts its connection to the past unapologetically and reflects on how our perceptions have shaped both constructive and problematic visual propositions.
Just as AI-generated images exist in undeniable dialogue with the past — like living archives of how we’ve pictured the world — it should come as no surprise that the 21st century has invented a machine capable of reusing the visual repository of our humanity. AI effectively recycles images that no human will ever have the capacity to consume, offering us glimpses (when used intelligently) of how we have built our visual culture.
But AI is also a mirror. If you feed it shallowness, it regurgitates and even exacerbates that shallowness. If you present it with challenges and questions, it will provide new inquiries and propositions. An example is Diego Trujillo’s project, Blind Camera (2022), where the artist purposely trained an artificial neural net that is incapable of seeing anything other than Mexico City. In the process, he revealed how these systems are intrinsically biased according to the images and sounds on which they are trained. We need more artists to engage with machine learning in order to reveal alternative modes of understanding it.
Caroline Zeller on the Costs of Generative AI
For the past two years, I have explored the creative potential of generative AI, working as a digital artist and creative director while collaborating with companies including Google, La Samaritaine, Reisinger Studio, and Jean-Benoît Dunckel, one half of French musical duo Air.
But today, I am stepping away. The opacity of these systems, their extractive nature, environmental cost, and addictive design have made it impossible for me to continue using them in good conscience.
Gen AI has been celebrated as a tool for the democratization of creativity, but what it truly democratizes is the industrialization of art. These models are built on massive, unverified datasets that include the work of artists often without their consent; they are trained through the invisible labor of thousands of underpaid workers who label and refine those datasets under exploitative conditions; and they generate emissions that are entirely unsustainable.
Gen AI platforms are also designed to be addictive. Like social media, they rely on dopamine-induced cycles of instant gratification, where users are rewarded with a constant stream of visually striking results. The danger is that artists become trapped in an endless loop of generating, tweaking, and refining images, mistaking speed for depth and automation for authorship. Like Web2 before it, AI accelerates creation but fragments attention, eroding the slow, reflective process required for artistic mastery.
Rather than serving as a tool for artists, Gen AI turns artists into tools, converting human creators into data bodies while feeding off their work, preferences, and cognitive labor in order to refine its outputs.
Artists are no longer in control of their materials but have themselves become resources to be mined. This inversion is one of the most profound and insidious shifts Gen AI has introduced into the creative process. More than automation for the sake of artistic convenience, we are witnessing the extension of surveillance capitalism into the realm of the imagination. This is not just about making art; it is about mapping, predicting, and ultimately influencing human behavior at scale.
We need to ask ourselves: Do we really need AI in artistic creation? If so, in what context, and under what conditions? The current trajectory of generative AI is unsustainable — not just environmentally, but culturally. If we allow creativity to become another commodity controlled by a handful of tech corporations, we risk losing the very thing that makes art meaningful: its singularity, its resistance, and its ability to exist outside the logic of extraction.
Trevor Paglen on the Future of AI Art
Let’s face it, folks. For serious artists, the time for AI-generated images has come and gone. We’ve seen enough of the endless Midjourney aesthetic, the hyper-polished dreamscapes, the fake vintage photographs, and the infinite parade of slightly uncanny Renaissance paintings that all sort of look the same. The novelty has worn off. AI-generated images are everywhere now — flooding stock image sites, autogenerating illustrations for clickbait articles, and diluting the cultural space with a whole lot of noise.
But is AI art over? Nope. Now is the time to start thinking more broadly about what AI art could actually be. A few ideas for paths forward:
Suggestion 1: Right now, much AI art seems trapped in the prison of visuality and textuality — artists making images or text. There’s a lot of good work being done in these areas, but art doesn’t have to be visual and AI isn’t inherently visual. What if AI models weren’t trained to generate images or texts but something more complex — say, the economic rhythms of a city, the shifting logic of a supply chain, or the invisible networks of social relationships?
Suggestion 2: The killer app for AI art is probably not image or text generators. It’s code generators. AI allows artists to create their own software with far greater ease than ever before. With code generators, artists could build their own versions of Photoshop, their own programming languages, or entirely new forms of digital media tailored to their own aesthetic concerns.
This isn’t hypothetical. In my own practice, many of my projects have required building custom software from the ground up. Projects like Clouds (2019) and Sight Machine (2017) took years of development, and in my studio, we created our own computer vision environment called CHAIR to create those works. I recently rewrote a lot of the CHAIR code to add a huge amount of functionality. It was astonishingly easy compared to what it was just a few years ago.
Suggestion 3: Why can’t a model itself be the artwork?
Instead of focusing on the images a model generates, what if we treated the model — the system, the latent space, the potentiality — as the true art object?
This is something I explored in Evolved Hallucinations (2024), where the models themselves contain a potentially infinite number of possible artworks. Could an artist create a model the way Sol LeWitt wrote conceptual art instructions? Could a collector collect a model instead of an individual output? This is something Mat Dryhurst and Holly Herndon have also been thinking a lot about.
Suggestion 4: AI art doesn’t have to be made with AI. The relationship between images, text, and meaning is fundamentally shifting because of AI — even in works that aren’t explicitly made with AI. This is something I’ve been thinking about in my Cardinals (2024) project, which is about how our relationship to images, texts, and representation is being dramatically upended by the existence of AI.
So, no — AI art isn’t over. But the future of AI art isn’t just about making more images. It’s about redefining what art can be in an era where seeing itself is shaped by machines.
Memo Akten & Katie Peyton Hofstadter are Los Angeles-based interdisciplinary artists, researchers, and collaborators whose work investigates the entanglements of technology, consciousness, embodiment, and culture. Merging backgrounds in dance, writing, poetry, drawing, sculpture, computer science, artificial intelligence, computational art, and public practice, they create speculative simulations, data dramatizations, immersive installations, and narrative experiments that probe the human condition in an age of artificial intelligence and accelerating transformation. Akten, originally from Istanbul, Turkey, is an artist, musician, and researcher whose practice bridges machine learning, consciousness, perception, and spirituality. A pioneer in artistic explorations of deep neural networks, he holds a PhD in this topic from Goldsmiths, University of London, and is Assistant Professor at UC San Diego. Hofstadter is a multidisciplinary artist, writer, and curator whose work investigates the complex relationships between embodiment, consciousness, and technologically mediated imagination. She is co-founder of global public art campaigns such as the ARORA network and the Climate Clock in NYC. Their project, Superradiance runs from March 29 to April 30 at CTRL Gallery, Los Angeles. Following exhibitions at Tribeca Film Festival, the Digital Body Festival, and Getty’s PST ART, it will continue to the Athens Digital Arts Festival and Jacob’s Pillow in 2025.
Botto is a decentralized autonomous artist operating since 2021. Developed by a team of artists and technologists, Botto exists at the intersection of machine creativity and decentralized social coordination, employing processes of automated AI creativity that are governed by a market-driven crowd. Botto’s practice explores themes of human-machine collaboration, AI agency, and value distribution. The artist has been a part of shows at Gazelli Art House, Vellum LA, Verse SOLOs, Fellowship, the Museum of Crypto Art, Colleccion SOLO, SuperRare, Ethiopia, DYOR, Feral File, SIGGRAPH, SONAR + D, MMMAD Festival, HEK Basel, ArtScience Museum, and more.
Alejandro Cartagena is an artist and curator that lives and works in Monterrey, Mexico. His projects employ landscape and portraiture as a means to examine social, urban, and environmental issues. In his practice, he also uses book publishing and archives of discarded photographs as a way to explore new visual storytelling that address issues of photographic representation. He is also the co-founder of Fellowship, an online gallery championing artists at the forefront of photography, AI, and video. Through Fellowship, Cartagena bridges traditional photographic practices with cutting-edge technologies, showcasing how AI and digital tools can expand creative possibilities for artists.
Stephanie Dinkins is a transmedia artist who creates platforms for dialog about race, gender, aging, and our future histories. Dinkins’ art practice employs emerging technologies, documentary practices, and social collaboration toward equity and community sovereignty. She is driven to work with communities of color to co-create more equitable, values-grounded social and technological ecosystems. Dinkins exhibits internationally. She is the inaugural recipient of the LG Guggenheim Award for artists working at the intersection of art and technology and a Schmidt AI2050 Senior Fellow. Her art practice has been generously supported by United States Artists, Knight Foundation, Creative Capital, Creative Time, Onassis Foundation, Stanford Institute for Human-Centered Artificial Intelligence, Open Society Foundation, Eyebeam, Pioneer Works, NEW INC, Laundromat Project, Santa Fe Art Institute, and Art Omi.
Charlotte Kent is an arts writer based in New York City and Associate Professor of Visual Culture and Head of Visual and Critical Studies at Montclair State University. She is co-editor of Contemporary Absurdities, Existential Crises, and Visual Art (Intellect Books, 2024), co-author of Midnight Moment: A Decade of Artists in Times Square (Monacelli, 2024), and an Editor-at-Large for The Brooklyn Rail. With ongoing research supported by the National Endowment for the Humanities: Dangers & Opportunities of Technology and Google’s Artist + Machine Intelligence grants, she also serves on the College Art Association’s Committee on Intellectual Property.
Yuma Kishi is a Japanese contemporary artist who uses artificial intelligence to create data-driven digital works and sculptures. Borrowing motifs and symbols from the canons of both Western and Asian art history, his paintings distort our perceptions of the history of aesthetics. Using AI technology, his works evoke a sense of momentary dislocation in the viewerʼs awareness of the self, creating a liminal space between the here and now. Exhibited widely in Japan, his works have been featured by major brands such as Nike and in publications such as Vogue.
Trevor Paglen is an artist whose work spans image-making, sculpture, investigative journalism, writing, engineering, and numerous other disciplines. He has had solo exhibitions at the Smithsonian Museum of American Art, Washington D.C.; Carnegie Museum of Art, Pittsburgh; Fondazione Prada, Milan; Barbican Centre, London; Vienna Secession; and Protocinema Istanbul, and has participated in group exhibitions at the Metropolitan Museum of Art, New York; the San Francisco Museum of Modern Art, Tate Modern, and numerous other venues. He is the author of several books and numerous articles on subjects including experimental geography, artificial intelligence, state secrecy, military symbology, photography, and visuality. He is a recipient of the Electronic Frontier Foundation’s Pioneer Award, the Deutsche Börse Photography Prize, and was named a MacArthur Fellow in 2017. Paglen holds a BA from UC Berkeley, an MFA from the Art Institute of Chicago, and a PhD in Geography from UC Berkeley.
Caroline Zeller is a visual artist and creative director based in Lyon, France. Her work explores the relationship between humans and their environment, shaped by 12 years in Shanghai and Hong Kong. Her artworks have been exhibited internationally, including in Paris, Melbourne, Brussels, Hong Kong, and Bucharest. She created the world’s first generative AI-designed vinyl cover for JB Dunckel, co-founder of the duo Air, and designed the official digital artwork for Google France’s 25th anniversary. Her collaborations include Reisinger Studio and La Samaritaine. Since 2022, she explores the creative potential of Gen AI, leading workshops and talks for over 250 creatives and artists.