Interviews
October 7, 2024

Herndon, Dryhurst, and Hobbs on Liquid Images

Three of the world’s leading artists discuss generative art and AI with Alex Estorick
Credit: Holly Herndon and Mat Dryhurst, Infinite Images ∞ 1, 2021-2022, AI generated image. Courtesy of Fellowship
The following conversation is excerpted from a longer interview to be featured in Matthias Bruhn and Katharina Weinstock (eds.), Generativität, Munich, 2025.

In his book, After Art, David Joselit sought “to link the vast image population explosion that occurred in the twentieth century to the breakdown of the ‘era of art’”. The consequence, he argued, was a new kind of “image power [...] derived from networks rather than discrete objects.”¹ Given the growing number of artists developing transmedia practices, art is increasingly becoming a space for border thinking at the intersections of science and technology. For Joanna Zylinska, singular images such as photographs are “giving way to image and data flows” such that they are now “both objects to be looked at and vision-shaping technologies, for humans and machines.”² If generative artists are actively engaged in crafting an output space of multiple possibilities, those working with machine learning are curating an input space as the basis for new generations. 

Tyler Hobbs, Fidenza #445, 2021. Courtesy of the artist

It was Tyler Hobbs who coined the term “long-form” generative art to refer to “a special class of artistic algorithm” that outputs hundreds of images, each transferred to a collector without any intervention or curation from the artist.³ This development was stimulated by the online platform Art Blocks, launched in 2020, which allowed code to be minted as art on the Ethereum blockchain. Characterized by large populations of images, long-form generative art represents a departure from the historical tendency of artists working with code to curate small collections of prints or plots for exhibition in physical space.
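
By way of illustration, the sketch below shows one way such a system can work; the function and trait names are hypothetical, and this is not Art Blocks’ actual interface. A long-form script can derive every visual decision deterministically from a hash fixed at the moment of minting, so the same hash always reproduces the same output while nobody knows in advance what a given mint will look like.

```python
# A minimal, hypothetical sketch of deterministic long-form generation:
# a per-mint hash seeds a PRNG, and all visual traits are derived from it.
# Trait names and ranges are invented for illustration.
import hashlib
import random

def traits_from_hash(token_hash: str) -> dict:
    """Derive a reproducible set of visual traits from a mint hash."""
    seed = int(hashlib.sha256(token_hash.encode()).hexdigest(), 16)
    rng = random.Random(seed)  # same hash -> same sequence of choices
    return {
        "palette": rng.choice(["warm", "cool", "monochrome"]),
        "density": round(rng.uniform(0.1, 1.0), 3),
        "turbulent": rng.random() < 0.15,  # a rarer trait, roughly 15% of mints
    }

# The collector's mint fixes the hash; re-running the script changes nothing.
print(traits_from_hash("0xabc123"))
```

Because what the artist publishes is the algorithm rather than any single image, the surprise Hobbs describes below is shared by artist, platform, and collector alike.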

In his original essay, Hobbs made the point that “[n]obody, including the collector, the platform, or the artist, knows precisely what will be generated when the script is run, so the full range of outputs is a surprise to everyone.” For this reason, the long form has often seemed to reveal the emergent possibilities of code, while offering collectors an experience personal to them. However, thus far, the discussion around human-coded generative art has tended not to overlap with the conversation around generative AI, whereby advanced machine-learning (ML) algorithms trained on vast datasets produce new images, texts, and videos. Holly Herndon and Mat Dryhurst have been making ML models since 2017, contributing to the development of the AI image generator DALL-E, while their independent and collaborative practices also focus on music and voice. As their new work, The Call, opens at the Serpentine, Alex Estorick hosts them in a conversation with Tyler Hobbs about creativity at the edge of the human.

Holly Herndon and Mat Dryhurst, (Still from) The Call trailer, 2024. Directed by Foreign Body. Courtesy of the artists

Alex Estorick: Is this your first encounter? 

Mat Dryhurst: Probably outside of Twitter.

Tyler Hobbs: I don’t believe we’ve actually ever been in the same physical location at the same time.

AE: Tyler, how did your own practice shape your ideas about “long-form” generative art?

TH: I would generally call myself an algorithmic artist or maybe a generative artist, and I work primarily by constructing hand-coded algorithms that generate images. Shortly after I began making generative art ten years ago, I became very interested in seeing how far I could push a single algorithm to produce a varied and continuously fascinating stream of output. In my own practice I worked to understand how to grow the output space and the number of interesting outputs that I could get from a single algorithm. It took me a while to make even one interesting image, much less three, five, or ten. However, after a number of years, I released Fidenza (2021) with 999 uncurated outputs that ended up being quite popular. But I’ve also looked at versions of this style of work that involve curation. For example, I did a project called QQL the following year that was curated [by the collector] from a potentially unbounded stream of outputs from the algorithm.

My essay pointed out the sudden rise in popularity of what I called “long-form” generative art. This referred to the practice of crafting one complex algorithm from which you want to see hundreds or thousands of outputs, versus what had typically been the practice of generative artists: crafting an algorithm and then curating the single best output, or maybe a handful of select outputs.

A simple way to put it is: “how many images are we looking at from the algorithm?” But a more interesting way to think about it, I think, is: “what is the complexity of the output space?”
Tyler Hobbs and Dandelion Wist, parametric artist Appleboy, QQL #154, 2022. Courtesy of the artists

AE: The idea of the long form has provoked a lot of discussion among artists. In my conversation with Jeff Davis, he spoke of his interest in “narrow algorithmic spaces,” while Aleksandra Jovanić questions the premise of long-form on the basis that creating an algorithm already presupposes an infinite number of outputs.⁴ Julien Gachadoat, meanwhile, questions how evocative “long-form” really is: after all, “what is ‘long’?”⁵

TH: That is a perfectly valid question, but just because it’s a spectrum rather than discrete groups doesn’t mean you can’t attempt to label the ends of the spectrum. I think “short” and “long” are a decent place to start, if not perfect.

On the question of infinity, that is where the complexity of the output space matters. For example, if we take a pixel grid and randomize each pixel as black or white, we’re going to get a massive number of outputs but very little complexity in what we’re seeing. Of course, it is difficult to define complexity — we might think in terms of compressibility, for example — but I think it’s pretty easy for viewers to recognize complexity when they see it and that is the substantial distinction that I’m talking about. 
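
Hobbs’s caveat can be made concrete with a small experiment, offered here as a rough sketch rather than a measure he proposes: by the crude yardstick of compressed size, the random pixel grid he describes is maximally information-dense, while a simply structured image is almost pure redundancy.

```python
# A rough sketch of why compressibility alone is a slippery definition of
# complexity: a grid of independently random black/white pixels carries the
# maximum possible information (~1 bit per pixel, which compression cannot
# beat), yet it is exactly the "massive number of outputs but very little
# complexity" case; a simply striped image compresses drastically because
# its structure is redundant.
import random
import zlib

random.seed(1)
SIDE = 256  # a 256 x 256 grid, one byte per pixel

# Every pixel independently black (0) or white (255): structureless noise.
noise = bytes(random.choice((0, 255)) for _ in range(SIDE * SIDE))

# Runs of uniform color with random widths: simple, visible structure.
stripes = bytearray()
value = 0
while len(stripes) < SIDE * SIDE:
    run = random.randint(4, 32)
    stripes.extend([value] * min(run, SIDE * SIDE - len(stripes)))
    value = 255 - value

print("noise  :", len(zlib.compress(noise)))           # stays near 1 bit per pixel
print("stripes:", len(zlib.compress(bytes(stripes))))  # shrinks to a small fraction
```

Perceived complexity, in other words, seems to live somewhere between incompressible noise and trivially compressible order, which is the tension Hobbs points to when he falls back on viewers recognizing complexity when they see it.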

Artists today are targeting substantially more complex output spaces than generative and algorithmic artists did in the past. 
Tyler Hobbs, Fidenza #831, 2021. Courtesy of the artist

MD: I think that the “long-form” frame is useful in distinguishing between producing a bunch of outputs and selecting a group as a collection. There’s something useful in the essay establishing that this is actually a new kind of practice, because I do think it is. Then there is the secondary consideration of the model, pioneered by Art Blocks, of creating scarce and collectible moments. Of course, that does have some relation to the market, but it is also a legitimate proposal, one I welcome, for how to value these things. The analogy in our world is modeling.

There’s something powerful about an artist saying “no actually my algorithm or my model was constructed over time with a bunch of different curated inputs with lots of trial and error in order to come up with a system that allows people to interact with it.” That is where you really want optimal complexity or generalizability in the machine-learning space. 
Holly Herndon and Mat Dryhurst, xhairymutantx, prompt response to “jesus on the cover of artforum, kodachrome”, 2024. Courtesy of the artists

The striking thing to me is the analogy between this kind of long-form generative algorithmic process and what we’ve been thinking about in parallel, which is how to create a model as an artwork. That involves similar challenges. In developing xhairymutantx (2024) for the Whitney Biennial, we ended up trying to produce a text-to-image model whose outputs were reliably general enough that it was fun to play with and you felt like your contribution was actually meaningful. I understand that it is more limited in the traditional generative sense than it would be in the text-to-image space, but it seems like a complementary problem.

To me, the question is: “when you are in a sea of abundant imagery or infinite possibilities, what is the unique or scarce element?”

That element might be a characteristic that an algorithm can output. Fidenza is one of the most distinctive algorithmic environments, if not the most distinctive, and it has been incredibly successful for it. But now that everyone with an Instagram account is also an artist, the question is what is the scarcest element? Well, it’s that you went to this fancy school or that you are one of the people picked to be in the art fairs. That is a way of imposing scarcity. In our world [of AI], we have the challenge that, if anyone can type anything into a model or create a song or an image, what makes it valuable? Ultimately, it is those contextual elements and constraints that create value: the mint size, your social circle, or all of the conceptual baggage that you bring to the construction of an AI model, which we’re definitely prone to doing.

Holly Herndon and Mat Dryhurst, (Still from) The Call trailer, 2024. Directed by Foreign Body. Courtesy of the artists

Holly Herndon: We do both the open version and the curated version. One way of being open is being really playful with identity and allowing people to perform my identity (through Holly+). The scarcity is that there is only one me but the non-scarce thing is that I can let anyone perform me. There’s an interesting push and pull there. 

Coming from a background defined by hypercuration where I wanted to control every second of sound that I put out into the world, it was definitely an “aha” moment when I heard people performing through my voice. 

It was an entirely new way of interacting both with other people and with my own voice. That really unlocked this duality that we have of both curated outputs and openness with my IP by letting people perform through it. I think there’s room for both...


This conversation is excerpted from a longer interview to be featured in Matthias Bruhn and Katharina Weinstock (eds.), Generativität, Munich, 2025. Further contributions by Matthias Bruhn, Yannick Fritz, Adam Harvey, Charlotte Kent, Moritz Konrad, Roland Meyer, and Katharina Weinstock. The project is funded by the Deutsche Forschungsgemeinschaft (DFG), the Priority Program “The Digital Image,” and HfG Karlsruhe University of Arts and Design. The book will be available through Open Access in March 2025.

Holly Herndon and Mat Dryhurst are artists renowned for their pioneering work in machine learning, software, and music. They develop their own technology and protocols for living with the technology of others, often with a focus on the ownership and augmentation of digital identity and voice. These technical systems not only facilitate expansive artworks across media, but are proposed as artworks unto themselves. They were awarded the 2022 Ars Electronica STARTS prize for digital art. They have sat on ArtReview’s Power 100 list since 2021. Holly holds a PhD in Computer Music from Stanford CCRMA. Mathew is largely self-taught. They have held faculty positions at NYU, the European Graduate School, Strelka Institute, and the Antikythera program at the Berggruen Institute. They publish their studio research openly through the Interdependence podcast, and co-founded Spawning, an organization building AI models on consenting data. Their critically acclaimed musical works are released through 4AD and RVNG Intl.

Tyler Hobbs is a visual artist from Austin, Texas who works primarily with algorithms, plotters, and paint. Hobbs’ artwork focuses on computational aesthetics, how they are shaped by the biases of modern computer hardware and software, and how they relate to and interact with the natural world around us. Hobbs’ Fidenza (2021) series profoundly impacted the generative art landscape, reshaping perceptions of “long-form” generative art. Hobbs’ two most recent solo exhibitions were at Pace Gallery in New York and Unit in London. His work is in the collections of multiple prominent institutions including the Los Angeles County Museum of Art and the San Francisco Museum of Modern Art.

Alex Estorick is Editor-in-Chief at Right Click Save.

___

¹ D Joselit, After Art, Princeton and Oxford: Princeton University Press, 2013, 88 and 94.

² J Zylinska, AI Art, London: Open Humanities Press, 2020, 106.

³ T Hobbs, ‘The Rise of Long Form Generative Art’, tylerxhobbs.com, August 6, 2021.

⁴ A Estorick, ‘The Color of Code | Jeff Davis’, Right Click Save, May 8, 2023.

⁵ A Estorick, ‘The Power of the Plotter’, Right Click Save, November 14, 2023.