
Illustrating Dystopian Future Poems with Stable Diffusion and Dall.e 2 AIs - Comparing Directed with Non-Directed Output

Her addiction was the feeling of scoring a bargain. Image by Hugging Face Stable Diffusion Demo based on a prompt by David Arandle (TET).

Far from being fearful of AI image generators like Dall.e 2 and Stable Diffusion, I feel a new skill set is emerging for artists who embrace the technology. Chief among those skills are the ability to write 'quality' text inputs, and the ability to curate the best outputs (i.e. images) when they are being fed into a bigger project or art piece.

Case in point: one of my FB friends shared an article by Jesus Diaz, AI was made to turn David Bowie songs into surreal music videos, in which YouTuber aidontknow fed the lyrics of Bowie's Space Oddity into Midjourney AI and then curated the resulting images into a video clip for the song (embed below).

While aidontknow says they made only minimal changes to the lyrics, such as clarifying which characters are being spoken about in the song, from my experience you don't get quite that cohesive a range of images without also suggesting an art style and giving more detailed direction such as camera shots, etc.

Regardless, this inspired me to revive one of a series of dystopian future poems I wrote between 2005 and 2006 dealing with the human condition, virtual reality consumerism, and AIs. The poem is titled, Rachel. The video of me reciting it is actually the very first entry in this blog (because this blog was initially going to be an art piece telling the story of those poems; if you read that post it's actually the start of a story rather than a blog post).

One at a time I entered each line from my ten line poem into both Dream Studio's Stable Diffusion AI and Dall.e 2's AI, with no modification to the lines other than appending "An Oil painting the style of Blade Runner the movie. Wide angle lens." to the end of each prompt. This was to give every image a unified look and, when I wrote the poems, I always imagined the art in a Blade Runner style.

You can see this Blade Runner influence in the digital art image I created for another of the poems (below), called Stealing. This image is the first and only complete artwork I made at the time.

Stealing. One of nine dystopian future poems written by TET in 2005-2006. Art by TET.

Incidentally, you can read more about my whole concept, and read another of the poems, The Fabulous Machine, in my TET Life blog article, Virtual Reality Addiction Meets Online Shopping and Death! But I digress.

Feeding my ten lines into both AIs, I generated eight images per line with DreamStudio (which was free at the time) and four per line with Dall.e 2, though I only had enough free credit left to enter nine of my ten lines into the latter.
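If you wanted to automate this kind of line-by-line generation rather than pasting prompts into a web interface the way I did, the same workflow could be scripted against the open Stable Diffusion model. The sketch below uses Hugging Face's diffusers library; the model checkpoint, the number of candidates per line, and the file names are just my illustrative assumptions, not what Dream Studio or Dall.e 2 actually do behind the scenes.

```python
# Minimal sketch, assuming Hugging Face's diffusers library and a CUDA GPU.
# I actually used the Dream Studio and Dall.e 2 web interfaces, so the model
# choice, image counts, and output file names here are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

# The style suffix appended to every line of the poem for a unified look.
STYLE_SUFFIX = "An Oil painting the style of Blade Runner the movie. Wide angle lens."

# Placeholder lines standing in for the full ten line poem.
poem_lines = [
    "Rachael patched in a circuit wired to her brain.",
    "Her addiction was the feeling of scoring a bargain.",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for i, line in enumerate(poem_lines, start=1):
    prompt = f"{line} {STYLE_SUFFIX}"
    # Generate several candidates per line, then curate the best by hand.
    result = pipe(prompt, num_images_per_prompt=4)
    for j, image in enumerate(result.images, start=1):
        image.save(f"line{i:02d}_candidate{j}.png")
```

Generating eight candidates per line, as I did with DreamStudio, would just mean raising num_images_per_prompt, memory permitting; the curation step stays entirely human either way.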

I curated the best images from both AIs into a video presentation that includes the poem's words, plus YouTube library music and other sound effects. Hopefully it gives you a sense of the poem, its mood, and what it's trying to convey.

Most of the more polished, cleaner looking images are by Dream AI, which tended to fixate on the neon-lit darkness of the city depicted in Blade Runner, while the more painterly images are Dall.e 2's, which seems to regard the textured oil paint look as the very definition of 'oil painting'.

At this point I thought I was going to finish the project, but then I started to wonder: how would my video presentation look with more directed images that actually described more of the type of image I had in mind for each line of the poem?

For example, for the first line of the poem I originally entered this prompt:

Rachael patched in a circuit wired to her brain. An Oil painting the style of Blade Runner the movie.

For my second video presentation I entered this prompt:

Rachael, sitting on her bed, wearing a VR headset wired to a computer in her cyberpunk style bedroom, patched in a circuit wired to her brain. An Oil painting the style of Blade Runner the movie. Wide angle lens.

As you can see, it's a lot more detailed and not the exact line verbatim. Below is a side by side comparison of what I feel is the best produced image for each prompt, both generated by Dream AI.

Side by side comparison of Dream AI's output with an unedited, direct interpretation of the first line of my poem used as a prompt on the left. On the right the prompt included more description, along the lines of what I had in mind for an image when I wrote the poem.

Which image is the better interpretation of the first line is entirely subjective, especially as what I envisioned in my head is not necessarily the same vision anyone else would imagine reading my poem for the first time, because no one else has all the additional context I do.

Below is my updated video presentation, using the best images generated with my more detailed input prompts (same music and sound effects, just to save time).

Note that I didn't use Dall.e 2 this time because I didn't want to pay for more credit. I also didn't use Dream's AI directly, for similar credit reasons (I'm not like Rachael, spending all my money on zeroes and ones for fun). Instead I used Hugging Face's demo version of Stable Diffusion, which is slower but essentially the same AI with a few fewer settings, and completely free at the time of writing this. (Insert rant here about all these AIs putting up paywalls rather than going the free, ad supported route.)

Anyway, what do you think of my second video presentation?

Creating works like this really does show that the human element of generating quality prompts is very much a skill to be learned, as is the curation of the output. Not every prompt produces the results you are hoping for, particularly if the AI fixates on the wrong part of a prompt as the main subject to highlight.

Several times in my second presentation I completely scrapped detailed prompts that I thought should get good results but were just producing garbage (never is the computing adage "garbage in, garbage out" more apt than with text-to-image AI generators).

As I said in my previous musing on AIs, Is Your Next Design or Writing Partner an AI?, these algorithms do not actually think for themselves. Even if you were to use a writing AI to randomly generate prompts for an image AI, neither would have any notion of the output as an abstract concept, or how that concept might relate to other prompts. The human element is still key in getting the best images.

I'm tempted to try this with all my poems in this series. It seems very appropriate to use AI to generate images for poems about AI and how humans are finding more ways to hook themselves into 'the machine' for longer and longer periods at a time (not to mention the rise of corporate money machines, passively draining your bank account - did I mention all the text-to-image AIs being put behind paywalls yet?).

One of my original concept sketches for Stealing, drawn alongside the poem in 2005.
I guess the ultimate experiment would be for me to execute the project in the way I initially envisioned back in 2005, with a combination of digital collage images mixed with my own hand drawn sketches. I'd also need to write the accompanying narration that links the poems together, which I think was a kind of future noir detective story. I'm not sure, because my first post in this blog is the only part of that I actually wrote.

The question is, now that I've been influenced by AI text-to-image generators, could I even produce what I had in mind back in 2005?

