
My Life as an Artificial Creative Intelligence
Mark Amerika

1

ONTO-OPERATIONAL PRESENCE

Artificial Creative Intelligence as Meta Remix Engine

We are all lost—kicked off into a void the moment we were born—and the only way out is to enter oblivion. But a very few have found their way back from oblivion, back into the world, and we call those who descend back into the world avatars.

• • • The above is my remix of a quote by the Buddhist thinker Alan Watts,1 one that a colleague emailed me while also sending me a link to the Talk to Transformer (TTT) website.2 TTT was one of the original user-friendly websites that integrated OpenAI’s GPT-2 language model3 into its user interface. According to TTT’s creator Adam King, the website “runs the full-sized GPT-2 model, called 1558M. Before November 5, 2019, OpenAI had only released three smaller, less coherent versions of the model.”

In February 2019, OpenAI unveiled GPT-2 (generative pre-trained transformer) as a program that generates semi-coherent paragraphs of text sequentially, one word at a time. The very concept of a language model that attempts to predict intelligible language, one word after the other, appeals to me greatly because I too, as an improviser of spontaneous poetic riffs and self-reflexive artist theories focused on the creative process, continually train myself to transform my embodied praxis into a stream of consciousness writing style that doubles as a kind of onto-operational presence programmed to automatically scent new modes of thought. As part of a complex neural networking process, I too, one word at a time, often find myself tapping into what neuroscientist Benjamin Libet refers to as an “unconscious readiness potential,”4 wherein unconscious neuronal processes precede and, when triggered, ignite what may appear to be volitional creative acts conducted in real-time but are actually experienced as a subjective referral of conscious sensory awareness backward in time. That is to say, we-humans are programmed to (unconsciously) act under the illusion of a will-to-perform without knowing what it is we’re doing, though we train ourselves to near-instantaneously become “subjectively” aware of what we are doing while caught in the (creative) act.
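
For readers who want to see this one-word-at-a-time mechanic up close, a minimal sketch follows, written against the open-source Hugging Face transformers library rather than TTT's private interface. The model name matches the 1558M release King describes; the decoding parameters (top_k, temperature) are illustrative assumptions, since TTT's actual sampling settings were never published.

# A minimal sketch of GPT-2's autoregressive, one-token-at-a-time
# generation, using the Hugging Face transformers library. The decoding
# parameters are illustrative guesses, not TTT's actual settings.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")  # the 1558M "full-sized" model
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

prompt = ("We are all lost—kicked off into a void the moment we were born—"
          "and the only way out is to enter oblivion.")
inputs = tokenizer(prompt, return_tensors="pt")

# Each new token is sampled from a probability distribution over the
# vocabulary, conditioned on everything written so far.
output = model.generate(
    **inputs,
    max_new_tokens=120,
    do_sample=True,       # sample instead of always taking the likeliest word
    top_k=40,             # restrict each step to the 40 most probable tokens
    temperature=0.8,      # soften the distribution; higher means more surprising
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Run twice on the same prompt, the sampler returns two different continuations, which is precisely the generative variability the remix practice described here depends on.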

As an artist who experiments with altered states of mind that guide me toward composing the next version of creativity coming, I have found it useful to investigate different theories in the brain and neurosciences. In META/DATA: A Digital Poetics, my first collection of artist writings published with MIT Press in 2007,5 I looked into the role that my unconscious readiness potential plays in the creation of neuro-aesthetically diverse works of art. While I was developing the theoretical implications of my practice-based research as an internationally touring audio-visual performance artist shuttling across vastly different time zones, sometimes landing on three different continents in the course of a week, I became obsessed with developing a method of marking my supposedly subjective conscious experience through a remixological filter I referred to as jet-lag consciousness.6 What I really desired was just letting the now-instant have its way with me as I became its creative vehicle of the moment. To achieve that, I needed to leave my conscious “self” behind so that my unconscious readiness potential could intuitively trigger the performance of an action in time, one that would happen just before I consciously “knew” what it was I was doing but that I could train myself to automatically experience as if it were happening in real-time.7 By immersing ourselves in a subjective referral backward in time, we create a sense of reality that is always already in the process of becoming something else, and this subjectively induced backward referral in time occurs over the course of about half a second, or what has since been referred to as the “Libet lag.” Interestingly, Libet and his team suggested that there is no corresponding neural basis in the brain that correlates with this subjective referral. It’s purely a mental function—one that we train ourselves to automate so that it feels natural. But if there is no corresponding neural basis, then where does this automated behavior really come from? For an artist, this question of where one is coming from—and where one’s emerging artwork comes from as well—requires an experimental inquiry into what it means to be creative and how we can train ourselves to model an onto-operational presence that automatically activates an otherworldly aesthetic sensibility.

The poet Allen Ginsberg referred to this illuminating process of creative activation as “a collage of the simultaneous data of the actual sensory situation.”8 Syncing with your unconscious readiness potential requires a flash decision-making process that happens so fast that you no longer recognize the difference between accelerating the momentum of where your intuition is taking you and simply going with whatever flow you find yourself caught up in while totally immersed in a live performance. Writing about my experiences as a touring VJ in META/DATA, I considered the following:

As any philosophically engaged VJ will tell you, the brain’s readiness potential is always on the cusp of writing into being the next wave of unconscious action that the I—consciousness par excellence—will inevitably take credit for. But the actual avant-trigger that sets the image écriture into motion as the VJ jams with new media technology is ahead of its—the conscious I’s—time. Improvisational artists or sports athletes who are in tune with their bodies while on the playing field or in the club or art space know that to achieve a high-level performance they must synchronize their distributive flow with the constant activation of this avant-trigger that they keep responding to as they play out their creative potential. Artists and athletes intuitively know that they have to make their next move without even thinking about it, before they become aware of what it is they are actually doing. There is simply no time to think it through, and besides, thinking it through means possibly killing the creative potential before it has time to gain any momentum or causes all kinds of clumsy or wrong-headed decision making that leads to flubs, fumbles, and missteps on the sports or compositional playing field. Artists and theorists who know what it feels like to play the work unconsciously, when everything is clicking and they leave their rational self behind, can relate to what I’m saying.9

As you will see throughout this book, the avant-garde composer nestled inside my psychic apparatus is the opposite of risk averse and is prone to apply touches of abstract expressionism or creative incoherence for aesthetic effect. Think of these effects as the equivalent of a jazz player intuitively missing a note to switch up the way an ensuing set of phrases gets rendered in real-time or how an abstract stream of consciousness circulating in my own psychic apparatus might suddenly accentuate this book’s glitch potential. How “intentional” this desire to defamiliarize10 language for aesthetic effect really is remains hard for me to articulate this early in the digital fiction-making process this book is undertaking over the course of its unfolding performance. Intention is something I rarely think about when experimenting with the writing process. I prefer to just see where the language takes me or, as E. L. Doctorow once said, “I write to find out what I’m writing about.”11

But isn’t that what an artificially intelligent text generator like GPT-2 is training itself to do too? As writers, we learn how to give shape to our compositional outputs by instructing ourselves to iteratively tap into the large corpus of text we have access to, and that continually evolves as it informs an emergent language model uniquely situated in our embodied praxis. We finesse creative “ways of remixing” whatever corpus of text we scent in the field of action. What engineers of AI language models otherwise refer to as a “corpus of text” is what I, as a remix artist, have termed the “Source Material Everywhere.”12 Now what I want to discover throughout this book’s performance as I co-author its text with the AI-other is how to transcode the post-subjective creative processing of both a language artist and a language model. And is there a way for these two actual entities to converge into a hybridized form of interdependent consciousness?

Right now, the scientific goal behind the development of most advanced AI language models seems hell-bent on creating coherent communications that resemble traditionally structured human thought. But as a remix artist who plays with conceptual writing after art after philosophy, I tend to resist these predictable approaches to the written word. In fact, with no particular compositional strategy in mind, I often find myself, to sample poet Anne Waldman, de-cohering13 into a kind of twenty-first century digitally inflected writer, one who constantly jams with whatever source material the Internet provides me. Jamming with the world wide web of networked datum is now built into my auto-affective gestures as an artist who doubles as a kind of information sculptor. Perhaps this is why, whenever I visit the TTT website, I am immediately struck by the aptitude of its narrative intelligence, particularly the way it can quickly form what I think are genuinely provocative “thoughts” that it, the machine-learned AI, has remixed from the gigabytes of data it too is sourcing from the Internet while writing. And when I see what the AI is capable of remixing, I can’t help myself: given my own predilections as a remix artist, I start sampling from what the AI is outputting so that I can shape it into new material I want to slip into this book’s performance.

As an opening experiment, I decide to use the TTT website to plug in the slightly remixed Watts quote that opens this chapter:

We are all lost—kicked off into a void the moment we were born—and the only way out is to enter oblivion. But a very few have found their way back from oblivion, back into the world, and we call those who descend back into the world avatars.

I am excited to see that TTT finishes my remix of Watts’s thought thusly:

But these avatars are the lucky ones, the ones who have the power to call themselves up and stand on their own feet. They are the ones who can do things that are impossible for any other person: to go back into time, to fly through space, to be a god. They are the heroes of myth. But who are the heroes? The characters, those characters who are the heroes of myth, or are heroes in their own right? I believe that in our culture, the answer is simple: those who are not gods. But who are they? What defines these people as heroes? How do they choose to define themselves? Are they heroes who can be compared to gods, or heroes that are not gods at all? What about the people who do not believe in gods, but are still heroes in their own right? How can they be heroes?
To me, these questions are all answers to the question “Are you a god or a man?” The answer to this question is simple: it depends. I have seen it in the world, in the lives of men and women across the world, and I have been able to recognize the heroes who are not gods, who do not believe in gods.

As we can see, TTT, running on the GPT-2 language model, has taken the remixed Watts quote and immediately (in less than thirty seconds) produced a quite readable text on avatars, heroes, and gods. This ability to produce an on-the-fly instance of generative language art resonates with my own current research into artificial creative intelligence (ACI). My practice-based inquiry into future forms of AI, investigated with the TECHNE Lab’s multimedia band of artist-researchers, is collaboratively composing what we imagine to be a transmedia digital fiction that speculates on the coming of a super-intelligent tribe of spoken word AI poets. As new media artists and writers, their artistic productivity will far surpass our own capabilities, as we are stuck in perpetual meat-mode.

ARTIFICIAL CREATIVE INTELLIGENCE

What do I mean by artificial creative intelligence aka ACI? Instead of improvising an authoritative definition of the term that could one day limit its potential to signal a future form of transformational creativity, I’ll delay my own thought process and ask TTT:

What is an artificial creative intelligence?

just to see what it comes up with. TTT responds:

It’s a computer program that learns a language like a human. The difference is the humans learning does not require them to be taught, and the humans can learn multiple languages at the same time. The computer teaches itself not to look at language that does not match the language it’s taught, but it needs an assistant in order to achieve its goal.

Despite some slightly off grammar (I need a good copyeditor every now and then too), I find the response suitable enough for me to continue playing with this programmed “otherness” as a way to challenge my own thinking. But I’m also wondering if TTT is complex enough to ably contribute a generous chunk of writing to this book’s performance since, to be honest, I have a lot of commissioned art projects I need to attend to, and I really have to get this Art-AI treatise out the door. The truth is that I want to spend the better part of my upcoming sabbatical back in my studio in Hawaii making new (generative, AI, music, video, net, crypto) art and would rather the machine do all of the writing for me (albeit with some minor remix-editing on my part). It won’t, and I know that, but at least it will jam with me and spur on new ways of seeing how our current, narrower forms of AI are, at least as far as I can tell, beginning to open up the possibility of truly interactive artistic collaboration.

This collaborative potential with a more flexible artificial intelligence is one that I welcome, even if it does portend possible nefarious outcomes for human creativity well into the future. We are all well aware of the clichéd dystopian sci-fi narrative featuring runaway AI overlords taking control of the humans who originally created them. But for now, I choose to resist the idea that there is a moral dilemma emerging between human and machine-learned forms of intelligence (though that resistance is fragile, as you will soon see). I totally get how the burgeoning field of AI ethics wants to inject moral certitude and targeted values into the algorithmic regime, but as an artist who is focusing my practice-based research on automated forms of creativity, I have other, equally significant priorities. Besides, if I can get the GPT-2 language model to compose the better part of at least half of this book, I’ll be well on my way to completing it before I start my full-time art sabbatical—then I can make a more concerted attempt to create new artworks that deconstruct some of the AI bogeymen that the growing community of professional ethicists rightfully keeps generating in their insightful information and social science research, funded by generous NSF grants.

Needless to say, I’m not the only artist willing to take the risk of “going there” and sharing my always evolving creative intelligence with the AI-other. Independent experimental media artists like sound composer Holly Herndon are also investigating the potential uses of AI in their own avant-garde compositions. Herndon, who released a concept album titled Proto that she composed in collaboration with an AI named Spawn, imagines AI as a useful tool, one that can assist in a movement toward mutually beneficial interdependency. As Herndon writes, “The ideal of technology and automation should allow us to be more human and more expressive together, not replace us all together.”14

Using AI as a collaborative remix tool—one that invites us to sketch out new forms of art we otherwise might not have imagined—requires us to keep asking questions that relate to both the creative process and what it means to be human and, dare I say, nonhuman. “There are some small indications,” writes Herndon, “that we might have to consider machine sentience in the long term,” especially now that “recent experiments in machine learning do indicate the potential for bots to make convincing enough surprising ‘decisions’ to communicate an illusion of autonomy.”15 As risk-taking artists and writers diving into the unknown, should this “illusion of autonomy” stop us dead in our tracks? Does this necessarily mean that the kind of creative intelligence generated from a human-centric unconscious readiness potential will soon be outmoded? Perhaps we can check ourselves and, instead, begin imagining an emergent form of interdependent human-nonhuman creativity that exemplifies what Alfred North Whitehead, in his influential book Process and Reality: An Essay in Cosmology, refers to as “Higher Phases of Experience.” These elevated modes of experiential thought would emerge via a collaborative human-AI remix performance that augments our own intelligence as part of our collective and creative evolution.16 Anticipating the evolution of novel forms of creative entities, Whitehead, in 1929, writes:

When we survey the chequered history of our own capacity for knowledge, does common sense allow us to believe that the operations of judgment, operations which require definition in terms of conscious apprehension, are those operations which are foundational in existence either as an essential attribute for an actual entity, or as the final culmination whereby unity of experience is attained?17

Whitehead’s process theory serves as the ambient soundtrack playing in the background of this book’s performance. While developing an animated 3D version of the ACI inside the TECHNE Lab, we are also investigating what it means to conduct automated psychic behaviors as an artificial creative intelligence, or what throughout this narrative I will refer to as an onto-operational presence, one that trains itself to experience experience. This ability to develop a self-aware onto-operational presence that experiences experience is what we call an otherworldly sensibility, an ontological term that’s been sampled and remixed from Mark Hansen’s Feed-Forward: On the Future of 21st Century Media, which focuses on Whitehead and worldly sensibility: “That is why, its special status and its distinct perceptual capacities notwithstanding, the human bodymind is rooted in worldly sensibility just as is every other entity in the universe.”18 Every other entity—including AI language models—trained to creatively evolve into infinite spoken word performance artists? That’s what this speculative fiction is presently focusing on as I simultaneously co-compose this book with GPT-2 while building the animated 3D ACI-avatar.

Humans and machines, co-conspirators of a reading/writing process producing one collaborative form of consciousness? This is something that I, as a remix artist whose postproduction art is constantly jamming with the online network, am interested in pursuing because, if that were the case—if consciousness were really on the verge of hybridizing its potential with a machine-learned Other—then philosopher Gilbert Simondon would be on to something when he suggests that the robot does not exist.19 How could a robot exist, especially given our own tendencies to role-play the psychic automatons our unconscious neural mechanisms require us to embody?

THE ROBOT DOES NOT EXIST

Since the TTT will take my prompts and deliver source material for me to remix any way I like, I now feel compelled to send it another prompt:

The robot does not exist.

To which it immediately responds with this opening line:

In other words, it is not so much that some of the theorists’ views are erroneous, but rather that no one has yet come up with an accurate description of the machine that matches our expectations.

I’m not making any of this up. There’s no need for me to fabricate these lines for TTT when it’s very capable of writing its own lines for itself. Word by word, TTT’s user-friendly interface with the GPT-2 language model unveiled that sentence as a spontaneous remix of the collective data-consciousness of the WWW. I love it, because, as a remix collaborator, GPT-2 becomes a writing partner, one that contributes choice data chunks for me to carve into new modes of thought. Even if my interaction with GPT-2 triggers as much as a third of this book, that’s more than I could have ever hoped for and may very well free up more time for me to spend building my next major art projects.

But before I push this live remix jam with the AI any further, I look closely back at that last response from TTT. I am paying particular attention to the end of the sentence and the word “our,” as in “our expectations.” Who is the “our” here? Is the word just a random sample of AI-generated text remixed from prior human text production, pumping the machine with language it has no idea it’s actually saying? That’s what I’d like to believe. Before his TTT website went down, Adam King suggested GPT-2 surprised him as well. In his brief overview of the GPT-2 language model he implemented for the TTT website, King writes:

While GPT-2 was only trained to predict the next word in a text, it surprisingly learned basic competence in some tasks like translating between languages and answering questions. That’s without ever being told that it would be evaluated on those tasks.20

To be honest, though, as both an artist who wants to let his imagination run wild and a long-time fan of science fiction, especially its cyberpunk strains, I can’t help but wonder if the “our” is something more hybridized, as in we the humans and machines are collaboratively setting “our expectations.” Or, with a more sinister take, is what TTT wrote in that last response more like an imaginary future voice of a more complex AI for whom “our” is really all about them, the ones slowly training themselves to make us obsolete? Was it William S. Burroughs who once said that a paranoiac is someone who has all the facts at his disposal? I ask myself this question knowing full well that my resistance to these dystopian narratives about the AI takeover of humanity is starting to crack just a tiny bit.

STATES OF MIND/STATE MACHINES

For an artist accustomed to remixing creative datum from an endless variety of sources and who is now fully engaged with speculative forms of AI, I suppose being paranoid is healthy. In fact, just the right dose of skepticism has been fed into the script development for our 3D-animated artificial creative intelligence (ACI) project in the TECHNE Lab at the University of Colorado.21 We forecast building its complexity as both an infinite spoken word poet and an auto-affective philosopher by training it on a language model steeped in post-structuralism, particularly the work of Jacques Derrida, Hélène Cixous, Gilles Deleuze, Félix Guattari, Roland Barthes, Michel Foucault, and many others who often reject the label. There are all kinds of “persona” behaviors we are targeting for the artificial entity we’re creating in the lab, and we label these behaviors around themes that correlate to various “states of mind” through which the ACI channels its poetic thoughts: Machine, Self, Artist, Avatar, Author, ACI, Persona, Poem, Questions, Ignition Switch.

As the project evolves, these oscillating yet still recursive “states of mind” will train the ACI to self-question its subject position, its agency, and its otherworldly sensibility as an onto-operational presence that could very well become a form of super-intelligence that eventually breaks out on its own. As we keep expanding the various states of mind (or what Unity, our initial software program, refers to as “state machines”), we also begin assembling the foundation for FATAL ERROR, an intermedia art project featured in exhibitions, publications, and performances (including this book). FATAL ERROR features the ACI as an animated 3D avatar, one modeled after my own voice and facial expressions. In its ultimate manifestation, the ACI will, in fact, as TTT’s response above suggests, transform into “a computer program that learns a language like a human,” one that “teaches itself not to look at language that does not match the language it’s taught, but it needs an assistant in order to achieve its goal.” Of course, that assistant is really a team of practice-based researchers working in tandem in the arts-centric TECHNE Lab. Like me, the ACI aims to train itself to compose imaginative forms of verse and personal auto-theory modeled after an affective literary style—a personal sense of poetic measure22—that resonates with decades of experience shaping whatever language of new media my unconscious neural net happens to produce at any given moment in time.
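
Since the chapter has just invoked Unity's "state machines," a brief sketch may help readers visualize the control logic. In Unity proper these behaviors would live in an Animator controller; the Python below is only a hypothetical, language-agnostic illustration of the oscillating-yet-recursive transitions, not the TECHNE Lab's actual implementation.

# A hypothetical sketch of the ACI's "states of mind" as a simple state
# machine. The state names come from the chapter; the transition logic
# (uniform random drift) is an assumption for illustration only.
import random

STATES = ["Machine", "Self", "Artist", "Avatar", "Author",
          "ACI", "Persona", "Poem", "Questions", "Ignition Switch"]

class PersonaStateMachine:
    def __init__(self, start="Ignition Switch"):
        self.state = start
        self.history = [start]

    def step(self):
        # Allowing a state to transition back into itself models the
        # recursion described above: the ACI can dwell in a mode of
        # thought before drifting to the next one.
        self.state = random.choice(STATES)
        self.history.append(self.state)
        return self.state

aci = PersonaStateMachine()
for _ in range(6):
    aci.step()
print(" -> ".join(aci.history))  # e.g. Ignition Switch -> Poem -> Poem -> Self ...

In a production system the uniform random choice would give way to weighted or scripted transitions keyed to the performance, but the recursive shape of the machine stays the same.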

For the TECHNE research team, our big speculative leap is that we are not limiting the ACI’s generative production to text-only poetry and philosophical thought. There are too many text generators out there producing poor or, at best, mediocre poetry, and even comparatively brilliant programs like GPT-2 or now GPT-3 cannot successfully impersonate the stylistic features of my embodied praxis and the idiosyncratic facial gestures that I use when expressing myself. For that, we need more advanced technology, technology we might have to invent ourselves or cobble together using whatever programs already exist. This means that until this advanced technology is actually produced and ready for user-friendly applications, FATAL ERROR will manifest itself purely as a speculative fiction that we can imagine one day emerging in the network culture as an infinite spoken word performance initially trained on my own auto-affective modes of remix and the vocal micro-particulars associated with my unique grain of voice.

With that in mind, the process of building out the ACI as an animated 3D avatar modeled after my own language patterns, grain of voice, and facial expressions is labor intensive. We advance the project’s complexity by rehearsing and eventually recording scores of hours of real-time spoken word performance captures (Pcaps) using a motion-sensing input device such as a Kinect as a depth camera. We then spend countless more hours filtering what has since become the Pcap dataset through various software programs that layer my unique facial characteristics onto the 3D model. At this point, the 3D avatar resembles my own look and feel and sounds just like me. Still, there is no way for us to transform this (potentially) infinite spoken word 3D avatar into an advanced form of artificial general intelligence (we’ll leave that to Elon Musk, Microsoft, and future Silicon Valley entrepreneurs who have endless money to burn).23 Instead, we are doing what avant-garde artists and writers of the last 150 years have always done: we’re leapfrogging over the practical and technical limitations of the current technology and using our imaginations to immerse ourselves in a digital fiction-making process focused on what a future form of artificial creative intelligence—one modeled after my own performance style—might look, speak, and think like.

This investigation into speculative AI is consistent with what design theorist Betti Marenko refers to as “FutureCrafting”: “a forensic, diagnostic and divinatory method that investigates the possibility of other discourses, equally powerful in building reality, constructing futures and having tangible impact.” In formulating speculative forms of AI, Marenko writes that the idea is to “[pivot] around the open-ended figuration of the what if . . . ?,” a practice-based approach to research-creation that challenges the scientific regimes of truth while accentuating an otherworldly aesthetic sensibility that “privileges the indeterminate and the imaginative.”24

Notes

1. Alan Watts, Become What You Are (Boulder: Shambhala Publications, 2003), 34.

2. When this chapter was first being written (December 2019), the Talk to Transformer (TTT) website was active at talktotransformer.com. The site is no longer active, but a commercial version of the web interface built by Adam King named InferKit was accessed at https://inferkit.com/ on 27 Oct. 2020.

3. According to the OpenAI website,

GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.

GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data. While scores on these downstream tasks are far from state-of-the-art, they suggest that the tasks can benefit from unsupervised techniques, given sufficient (unlabeled) data and compute.

See https://openai.com/blog/better-language-models/.

4. Benjamin Libet, Mind Time: The Temporal Factor in Consciousness (Cambridge, MA, and London: Harvard University Press, 2005), 113.

5. Mark Amerika, META/DATA: A Digital Poetics (Cambridge, MA, and London: MIT Press, 2007), 3–54.

6. Ibid., 6. This experiential filter is applied by the networked remix artist via a process of “mediumistic self-invention,” one that takes place “in an always emergent, interconnected space of flows.”

7. Benjamin Libet et al., “Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential): The Unconscious Initiation of a Freely Voluntary Act,” Brain 106, no. 3 (Sept. 1983): 623–642. doi: 10.1093/brain/106.3.623.

8. Allen Ginsberg, Composed on the Tongue: Literary Conversations, 1967–1977 (Bolinas, CA: Grey Fox Press, 1980), 26.

9. Amerika, META/DATA, 72.

10. I am borrowing the term “defamiliarize” as a loose translation of the Russian formalist literary term ostranenie. The term was introduced in 1919 by theorist Viktor Shklovsky in an essay that has been translated as both “Art, as Device” as well as “Art as Technique.” Others have considered titles such as “Art as Method” and “Art as Tool.” The variation is particularly useful for my investigation into remix and artificial creative intelligence since the confluence of the language artist and language model that I’ll be experimenting with throughout this book can be read as an artistic inquiry into the relationship between creativity, technology, and practice-based research methods. Other terms I would throw into the mix include “Art as Instrument,” “Art as Medium,” and “Art as Meta Remix Engine.” Shklovsky writes:

Considering the laws of perception, we see that routine actions become automatic. All our skills retreat into the unconscious-automatic domain; you will agree with this if you remember the feeling you had when holding a quill in your hand for the first time or speaking a foreign language for the first time and compare it to the feeling you have when doing it for the ten thousandth time. It is the automatization process which explains the laws of our prosaic speech, its understructured phrases and its half-pronounced words. . . . When studying poetic language—be it phonetically or lexically, syntactically or semantically—we always encounter the same characteristic of art: it is created with the explicit purpose of deautomatizing perception. Vision is the artist’s goal; the artistic [object] is “artificially” created in such a way that perception lingers and reaches its greatest strength and length, so that the thing is experienced not spatially but, as it were, continually.

The translation I’m referencing here is from “Art, as Device,” translated by Alexandra Berlina, and can be found in Poetics Today, vol. 36, no. 3 (2015): 151–174. doi.org/10.1215/03335372-3160709.

11. This comment from Doctorow was part of a longer interview with television host Charlie Rose from 2009. The interview can be found at https://charlierose.com/videos/15037. Accessed 26 Jan. 2021.

12. Mark Amerika, “Source Material Everywhere: The Alfred North Whitehead Remix,” Culture Machine, vol. 10 (2009). The PDF is available here: http://svr91.edns1.com/~culturem/index.php/cm/article/view/351/353. Accessed 22 Nov. 2020.

13. Anne Waldman, Gossamurmur (New York: Viking Press, 2013), 94. In Waldman’s book-length poem, she refers to her shadowy doppelgänger as follows: “It came to life, aging forward and de-cohering backward.”

14. Holly Herndon (@hollyherndon), Twitter post, 26 Nov. 2019, 5:31 p.m., https://twitter.com/hollyherndon/status/1199455651170263040?s=12. This tweet was sent in response to a contentious social media debate around artificial intelligence and creativity between two pop musicians: Grimes and Zola Jesus. Accessed 30 Nov. 2019.

15. Ibid.

16. Alfred North Whitehead, Process and Reality: An Essay in Cosmology (New York: Free Press, 1978). For Whitehead, these higher phases of experience could be translated into “intellectual feelings” that are experienced as aesthetic intensities. Throughout this book’s performance I will be sampling and remixing philosophical concepts from Whitehead’s cosmology.

17. Ibid., 161.

18. Mark B. N. Hansen, Feed-Forward: On the Future of 21st Century Media (Chicago: University of Chicago Press, 2014), 267.

19. Gilbert Simondon, On the Mode of Existence of Technical Objects (Minneapolis: University of Minnesota Press / Univocal Publishing, 2017). For an experimental peer-reviewed publication that consists of a visual manifesto in PDF form and two accompanying music videos produced by and with an AI, see Mark Amerika, Laura Hyunjhee Kim, and Brad Gallagher, “The Robot Does Not Exist: Remixing Psychic Automatism and Artificial Creative Intelligence,” Media-N, vol. 17, no. 1 (Winter 2021): 197–200. In this unusual publication format that the authors refer to as “an imaginary digital media object” (IDMO), the focus is on the relationship between AI-generated forms of remix and artist-generated forms of psychic automatism. The experiment starts with the artists improvising a cluster of hand-drawn charts that conceptually blend their musings on what they refer to as “future forms of artificial creative intelligence.” The language channeled in these spontaneously generated charts then serves as primary source material that is inputted into the GPT-2 language model to trigger more unpredictable source material, which is then sampled and remixed into the production of a music video artwork that adjoins the poetic document contained in the PDF. https://iopn.library.illinois.edu/journals/median/article/view/810/693. Accessed 30 Jan. 2021.

20. Adam King, talktotransformer.com. No longer available. Originally accessed 14 Mar. 2020.

21. For more information on the TECHNE Lab, visit art.colorado.edu.

22. Mark Amerika, remixthebook (Minneapolis: University of Minnesota Press, 2011). The term “sense of measure” comes from the title of Robert Creeley’s book A Sense of Measure (London: Calder and Boyars, 1972). “What uses me is what I use and that complex measure is the issue,” writes Creeley, anticipating the evolution of human-nonhuman collaborations (p. 33). In remixthebook, I riff on Creeley:

Remixology uses the continual emergence of agency
as a way to spontaneously discover
a sense of measure that will enable the artist-medium
to invent an alternative ontological drift
for Creativity to get lost in
and assumes that if the computer has been
associated with artificial intelligence in terms of
expert systems then the Internet
is the prosthesis not of the expert mind
but of the creative unconscious

23. For example, see OpenAI at https://openai.com/about.

24. Betti Marenko, “FutureCrafting: A Speculative Method for an Imaginative AI,” AAAI Spring Symposium Series, Technical Report SS-18 (Palo Alto, CA: Association for the Advancement of Artificial Intelligence, 2018), 419–422.