Whose Art is This? Training an AI to Make Fun Cat Images in My Art Style Using Open Art's PhotoBooth AI Workspace

Image by Picfinder, TET, and Photo Booth.

AI image generation is a rapidly developing field. Currently there's an ongoing debate about AI art image generators being trained on the works of practicing, professional artists without their permission, allowing almost anyone to create images in those artists' styles without compensating the artists in any way. Not to mention that most AI image generator output is generally defined by the U.S. Copyright Office, Library of Congress, as being in the 'public domain', able to be used commercially by anyone.

Technically that means if I try to sell an AI generated image as prints, anyone else can come along, take the exact same image, and sell it as prints too. Legally it's very much a developing situation alongside the development of AI tools themselves, with online AI generators such as Mage.space recently announcing a rewards program for AI model creators (an AI model is a general AI additionally trained on a specific data set, for example, many images from the same artist, so it can generate new images in that artist's style).

Which brings me to the point of this article. As an artist myself, what excites me most is the possibility of training an AI model on my own art style so I can generate new AI images, almost instantly, in that style. I think that would be an incredibly useful tool, specifically for producing new art in a style I no longer make physical artworks in.

Training an AI to Paint #TETCats# Style

The 30 Cat Artwork Images I trained my AI Model with.
Fig 1. The 30 Cat Artwork Images I trained my AI Model with.
In the early 2000s I painted and sold quite a number of stylized, cartoonish acrylic cat paintings. For a while they earned me a semi-regular income, along with several commissions to paint people's pet cats in my style. I haven't painted a single cat artwork since 2012.

My cat artworks also sold relatively well as prints on RedBubble. This got me thinking: maybe there's an opportunity here to create new cat artworks in my style with AI and sell them as prints.

I set about looking for an AI text-to-image tool that would allow me to train a model on my style, eventually settling on Open Art's Photo Booth as an easy beginner's option. So easy, you could do this too. You can train it on any images, not just art (train it on images of yourself and you can produce profile pictures of yourself in any art style, or doing anything you can imagine).

Training the AI

PhotoBooth uses Stable Diffusion 1.5 as its base model and allows you to create AI models trained on any set of 30-100 images, depending on which pricing plan you choose.

Since this was my first time, I went with the basic package with personalized photos for USD$10.00. This allowed me to train a single model on 30 images and gave me 400 credits (basically 400 image generations).

Open Art will keep your model for 30 days after which you can extend the time with a subscription. You can also download the model and use it with a local install of Stable Diffusion if you're technically minded and have a computer able to run it.

When you first set up your model you have a choice of four base models, trained toward males, females, pets, and other. I figured pets would be a better fit than other, since my training images are stylized cats.

For my 30 images I chose a set of cat paintings from 2004-2007. Click the link to see the Flickr Album of the full images. Photo Booth requires you to crop all your images into a square. Fortunately you can do this quickly after uploading. I tried to choose the most important area of each image.

It takes about 30 minutes to an hour for your model to be ready. From there the system works like most other browser-based AI text-to-image generators. To apply your model to the prompt you simply include the tag you set up, which in my case was '#TETCats# style'.
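
Incidentally, if you do download your model for a local Stable Diffusion install (as mentioned above), applying the tag works the same way in code. Below is a minimal sketch, not PhotoBooth's own code, using the Hugging Face diffusers library and assuming the downloaded checkpoint sits in a hypothetical local folder on a machine with a suitable GPU:

```python
# Minimal sketch only - assumes the trained model was downloaded to a
# hypothetical local folder called "tetcats-model" and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./tetcats-model",            # hypothetical path to the downloaded checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Including the training tag in the prompt is what pulls in the custom style.
prompt = "A painting of a happy cat #TETCats# style"
image = pipe(prompt).images[0]
image.save("tetcats_test.png")
```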

You also have various sliders you can adjust, which I'll get into as I go through my results.

The First Preset Generations

Once your model is ready to go you'll see a whole bunch of free, preset generations created with prompts that might be typical of the base model you chose - in my case 'Pets'.

A selection of free, preset AI Image generations of TETCats.
Fig 2. All of these images are a 'best' example from each of the ten preset prompt generations (one category, 'pets cooking', didn't produce anything remotely recognizable as a cat in my style, so I didn't include it). For each image there are seven other free generations. Promisingly, almost all generations featured my stylized cat head, markings, and colors.

While none of my preset generations were usable, since they weren't based on activities I'd typically depict my cats performing, they were really fun to see: sometimes creepy, but mostly showing promise that the training had captured my style to a degree.

Finding the Best Prompt (Highlights)

Obviously, with 400 credits, I can't show you every single image generation. Instead I'll run you through my thinking on my prompts and show a few of the best (and probably a few of the 'WTF is that?' images as well).

Early Prompts

The most common cat image I've painted is a ginger cat, crazily leaping from between flowers, trying to catch a butterfly. That was my first prompt (including the '#tetcats# style' tag that tells the AI to use your model): "A cat #tetcats# style, leaping from a field of flowers, trying to catch a butterfly."

For my early generations I went with the default AI settings and, for the whole time, I stuck with the standard 4 image generations for each prompt. If I got really promising results I clicked the option to generate 8 more images for the same prompt and settings.

Generally I didn't feel these generations captured the painterly texture of my work or the personality of the cats. To counter this I tried adding words to my prompt such as 'art', 'painting', 'warm colors', 'joyful, vivid color', and 'comical'. I even tried adding 'Van Gogh' to my prompt to see if I could literally get visible brush strokes into the image.

A sample of nine early prompt generations of a Cat leaping from flowers.
Fig 3. The top left image is the best from my first four image generations before I started modifying the prompt with additional words. Spot the two Van Gogh image prompts!

You Want Four Legs and ONE Tail?

You may have noticed in my early images some fairly deformed (or is that 'surreal'?) cats. I can assure you that cats with more limbs and tails than a standard cat should have were very much the norm, as were deformed limbs, multiple eyes and mouths, floating heads, and more.

I figured maybe my initial prompt was asking a bit too much so I went back to basics with this prompt: "A cat with four legs, one tail #tetcats# style"

This prompt, I swear, made the AI look at me like I was some kind of crazy person as it proceeded to add more legs and tails to almost every image than any single cat (or sometimes two cats) could ever need. Still, these generations were the most like something I would actually paint (if I were into painting mutilated cats).

Cat generations with more tails and legs than I asked for.
Fig 4. If you compare these to my original artworks that the model is trained on, I feel these generations look the most like my actual art style; however, the cats look like something Dali would've painted!

AI. This is What a Cat Looks Like!

My next idea was to use Image to Image generation. This involves uploading a reference image and then telling the AI how much you want that image to influence the output. In Photo Booth you get a 'Strength' slider, where 0 equals no influence and 1 equals 'just hand me back the reference image as output'.
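
For anyone running a downloaded model locally, the same idea looks roughly like the sketch below with the diffusers library's image-to-image pipeline (file names and model path are hypothetical, and diffusers' strength value runs in roughly the opposite direction to the slider described above):

```python
# Rough sketch of Image to Image with a strength setting, using the Hugging Face
# diffusers library rather than PhotoBooth's backend. File names are hypothetical.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "./tetcats-model", torch_dtype=torch.float16  # hypothetical model path
).to("cuda")

reference = Image.open("kitten_reference.jpg").convert("RGB")

# In diffusers, strength is denoising strength: near 0 hands back the reference
# almost untouched, near 1 lets the prompt take over (roughly the opposite of
# the slider described above).
image = pipe(
    prompt="A cat #TETCats# style, leaping, chasing a butterfly",
    image=reference,
    strength=0.6,
).images[0]
image.save("img2img_result.png")
```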

Image generations using a reference image.
Fig 5. The image in the top left is my reference image of a kitten on a white background. The other five images are generations based upon it with varying Strength and CFG settings.

At the same time I made an effort to learn more about the other settings I could adjust, like the CFG scale. This is kind of similar to the strength setting on the reference image, but controls how much the AI will try to stick to your prompt. A value of 2 means 'what prompt?' A value of 30 means 'I'll do my best to incorporate as much of this prompt as I can, but I hate you for micro-managing my creativity - it'll look awful!'. I settled on a value of 15 (the default is 7), which is considered good for detailed prompts.

I also increased the number of steps the AI would take to reach a final image from the default of 25 to 50. 25 is usually fine but 50 can often result in a more refined image. More than 50 is considered overkill and of minimal benefit.

Photo Booth also has three different samplers to choose from: DPM Solver++, DDIM, and Euler A. Euler A results in a softer image, great for those fantasy-style images; as for the other two, even Photo Booth says there isn't any significant difference between them. For whatever reason I felt DDIM was better for the generations I was getting.
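
If you're curious how these settings map onto a local Stable Diffusion setup, here's a sketch of the equivalents in the diffusers library: guidance_scale is the CFG scale, num_inference_steps is the step count, and the scheduler is the sampler (the model path is again hypothetical):

```python
# Sketch of the equivalent settings in the diffusers library. DDIMScheduler
# stands in for the DDIM sampler; DPMSolverMultistepScheduler and
# EulerAncestralDiscreteScheduler would correspond to the other two options.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "./tetcats-model", torch_dtype=torch.float16  # hypothetical model path
).to("cuda")

# Swap the pipeline's default sampler for DDIM.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="A cat #TETCats# style, leaping from a field of flowers, "
           "trying to catch a butterfly",
    guidance_scale=15,       # CFG scale: how closely to follow the prompt (default around 7)
    num_inference_steps=50,  # more steps can give a more refined image (default 25)
).images[0]
image.save("tetcats_tuned.png")
```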

I feel the reference image I initially chose wasn't a good one. While I was getting a very coherent interpretation of a cat, the white background was stifling my prompt when I again added flowers and butterflies into the mix.

Changing the Reference and the Prompt... Wow!

The photo you choose as reference can have a really dramatic effect on the output. I found an image of a real cat in a position closer to what I'd paint if I were painting a cat leaping after a butterfly. As my reference image was now more representative of this, I changed my prompt to: "A painting of a cat, with a large head, leaping in the air, chasing a butterfly, in a field of flowers. Warm colors. #tetcats# style" (I did try minor variations of this prompt but the results weren't radically different).

I asked for a larger head because my cats tend to have larger heads and those in the previous generations were too small by comparison.

More cat generations from a more dynamically posed reference image.
Fig 6. Again my updated reference image is in the top left. I changed my prompt back to a cat leaping from flowers chasing a butterfly (or some variation of it).

At this point I'd also discovered a good example of a negative prompt - which is an optional additional prompt you can enter that lists things you don't want to see in your image. I used this negative prompt on every generation going forward: "duplicate, deformed, ugly, mutilated, disfigured, text, extra limbs, face cut, head cut, extra legs, poorly drawn face, mutation, cropped head, malformed limbs, mutated paws, long neck".
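
For completeness, here's where a negative prompt would plug in on a local setup with the diffusers library (same hypothetical model path as the earlier sketches):

```python
# Sketch of negative prompting with the diffusers library. Everything listed in
# negative_prompt is steered away from during generation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./tetcats-model", torch_dtype=torch.float16  # hypothetical model path
).to("cuda")

negative = ("duplicate, deformed, ugly, mutilated, disfigured, text, extra limbs, "
            "face cut, head cut, extra legs, poorly drawn face, mutation, "
            "cropped head, malformed limbs, mutated paws, long neck")

image = pipe(
    prompt="A cat #TETCats# style, leaping from a field of flowers, "
           "trying to catch a butterfly",
    negative_prompt=negative,
    guidance_scale=15,
    num_inference_steps=50,
).images[0]
image.save("tetcats_negative.png")
```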

What I liked about these images was the flowers, which were definitely in my style. The cats, however, were moving away from my style, and all the images looked too smooth - more like digital art than paintings.

I kept my new reference image but switched up my prompt to: "A happy tabby cat  #tetcats# style  leaping from flowers at a passing butterfly. Rough painting"

I will admit that I really liked some of the generated images but they looked nothing like my cats.

Slightly different prompt, same reference image, six more generations.
Fig 7. While I really liked some of these generations they just weren't that close to my style at all.

Okay, That's Not a TET Style Cat, This is a TET Style Cat

At this point I decided to abandon using a reference image in the hope that my negative prompt would be enough to pull the AI back from generating deformed 'Dali' cats.

Doing this saw the return of cats that actually bore some resemblance to my style, but the images still looked too smooth. Too much like digital art. Plus, my negative prompt didn't always suppress extra limbs and duplicate cats.

Six more generations of TETCats. This time without a reference photo but with a negative prompt.
Fig 8. The image in the top left corner I would totally consider selling as an art print as is. Without the reference image you can see the cats and the flowers are more in my style again. I don't know what's going on with the cat in the lower right corner!

Whose Art is This?

By this point I had burned through more than half of my 400 credits and still hadn't seen anything that would genuinely make me question whether it was me or an AI that had created it.

You can see in the previous set of images that the bottom three have a little more of a painterly style, a result of adding the words 'impasto painting' to my prompt. While the images are clearly influenced by my art style, I thought I'd see what would happen if I added the names of some more famous artists, with styles similar to mine, that I know Stable Diffusion will recognize.

If you want to try adding artist names to your prompts, there's a searchable list of all 1833 artists represented in Stable Diffusion, with AI output comparisons for each.

Nine generations of TETCats style mixed with known Stable Diffusion artists with similar style.
Fig 9. All of these images are mixed with artist styles that Stable Diffusion recognizes. Each image also has the words 'impasto painting' in the prompt, with the exception of the bottom middle image, which I really only generated for fun because it's a popular generation style. Can you guess the artists? Answers at the end of the article.

No, Really... Whose Art is This?

By this point I'd almost burned through my 400 credits. While I don't think I achieved any one image that I could say looks exactly like something I could have painted, I did arrive at a look that blends my style with another artist's - as well as with the impasto painting technique.

The Race - Original Art by Josephine Wall
Fig 10. The Race - Original Art by Josephine Wall

Josephine Wall is a U.K.-based fantasy artist. Her work is highly detailed, painterly, and often brightly colored. She leans toward more realistically proportioned human and animal subjects within her art, and doesn't appear to have painted many cats.

I feel that last part is quite important, as my TETCats style seems to survive really well when blended with hers. I liked the blend of styles so much I used up all my remaining credits on 76 more generations.

My final prompt: "A happy tabby cat  #tetcats# style,  leaping from flowers, attempting to catch a passing butterfly. Josephine Wall, impasto painting. Minimal warm bright background."

15 generations of TETCats mixed with artist Josephine Wall's style.
Fig 11. A selection of TETCats images that include Josephine Wall's name within the prompt. It's fairly clear the cats and flowers are mostly influenced by my own style. Josephine's work likely adds a higher level of detail and definitely has influence over the background coloring.

Overall I really like the combination of styles. While not every generation produces a great image - particularly when it comes to limb counts - the cats and flowers are clearly influenced by my art.

Josephine's style adds a lot more texture detail and shading than I would use, along with the choice of coloring used in the backgrounds. I tend to stick to a single gradient of one or two colors.

Then there's whatever influence the words 'impasto painting' add into the mix since Josephine doesn't really have an impasto style.

The question is, genuinely, whose art is this?

An Original TET Cats Painting. AI TETCats Generated Image. Original Josephine Wall Painting.
Fig 12. Left: Original TET Cats Art Acrylic Painting. Middle: AI TETCats/Josephine Wall Generation. Right: Original Josephine Wall Painting, The Race.

Not only are these final images influenced by my own original art, I also curated the subject matter and the influences that would affect that subject matter, specifically referencing both the impasto painting technique and Josephine Wall's work.

Could I have arrived at a similar artwork through traditional means (actually researching Josephine's work and drawing and painting the final image)? Possibly, but it's highly unlikely, particularly because I haven't painted cats in years.

Plus, there's no reason for me to have my work influenced by Josephine's if I'm actually painting the work myself; I only added her influence to try to emulate a more painterly style similar to my own.

Are My Images a Breach of Josephine's Copyright?

This is definitely not legal advice but I have studied copyright law enough to relatively confidently say 'no'. Whether Josephine consented to having her art included in Stable Diffusion's data set is another matter entirely.

Comparing my AI generations to her original art, I'd say anyone would be hard-pressed to call my images a rip-off of her style (you can't actually copyright a style) or of any of her specific artworks (specific artworks are definitely protected by copyright).

While there's no specific amount of an artwork you can 'copy' without being in breach of copyright, it's up to a court to decide whether you've sufficiently 'transformed' a work for it to no longer count as a 'copy', i.e. if someone looked at your work, would they immediately connect it back to the artwork you copied?

I think even Josephine Wall herself would be hard-pressed to look at any of my AI generations influenced by her art and recognize her work as an influence without being told.

It's All Public Domain Anyway So Who Cares?

U.S. copyright law currently defines AI image generations as public domain. This could change in the future, depending on how image generation evolves and whether artists receive any compensation whenever their name is used in a prompt. Time will tell - and it's not that simple.

I could, theoretically, train an AI model on every aspect of my art and only generate images using that data. That's all my intellectual property. If I want to feed my art through an AI filter, is it any less my property when it comes out the other end? Particularly if I own the AI model as well? Have I made my IP public domain just because I used a computerized assistant to create the images?

How is that different from a professional artist hiring assistants to create the actual art? How is it different from a company owning the copyright on work they hired an artist to design and create for them?

It's Money on the Table for Artists

As much as artists are crying foul over their work being included in AI data sets without their permission, I think they'll soon be crying foul over all AI generations being considered public domain. AI image generation is too good a tool for modern artists to ignore altogether.

Being able to generate 400 high-quality, unique images in a specific style in less than an hour? That's money on the table for artists everywhere. Whether it's saving time brainstorming ideas, producing final artwork to sell as prints, or something else, it's a massive time saver.

You're going to want to say, with confidence, that this is your art, made with the assistance of AI. Without you, the AI image generations you curated simply would not exist.


---o ---o--- o---


Answers to Fig 9 image artists from left to right, top to bottom: 1. Leonid Afremov | 2. Paul Cézanne | 3. Berthe Morisot | 4. Thomas Gainsborough | 5. Lovis Corinth | 6. Andrew Atroshenko | 7. Gaston Bussière | 8. Pixar | 9. Josephine Wall

