You may or may not have heard of Stable Diffusion. It’s an AI model that generates images from a written prompt supplied by a person. The model is trained on small square images paired with keywords and metadata, which it uses to interpret the text prompt it’s given and generate a unique image of whatever it’s been asked for.
I’ve been experimenting with it out of curiosity and because of my interest in art and technology, and it’s been rather interesting, and dare I say, frustrating. Like it or not, this sort of technology is here to stay, and as it advances it’s only likely to improve.
Using Stable Diffusion, you type some words into a text field, wait a short while (or a very long while if you’re running on a CPU), and see the results. Many websites have cropped up and built communities around it, such as NightCafe, along with smartphone apps like StarryAI.
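Those sites wrap the model behind a text box, but you can also run it yourself. Here’s a minimal sketch using Hugging Face’s diffusers library; the checkpoint name and settings are my assumptions of a typical local setup, not anything the sites above actually expose:

```python
# A minimal local Stable Diffusion run with Hugging Face's diffusers
# library. Assumes a CUDA GPU; it also works on a CPU, just far slower
# (swap "cuda" for "cpu" and drop the float16 dtype).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one publicly hosted checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The same kind of prompt you'd type into NightCafe or StarryAI.
image = pipe("a cute grey cat").images[0]
image.save("cute_grey_cat.png")
```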
If you go to one of these sites and type in a simple prompt like
“a cute grey cat”

[Generated image: Stable Diffusion’s result for this prompt]
It’s a pretty simple image, but it was a simple prompt. Let’s expand it a bit and try
“a cute grey cat with yellow eyes wearing a blue bow tie”

[Generated image: Stable Diffusion’s result for the expanded prompt]
From what I’ve experienced so far with Stable Diffusion, it can be very difficult to get it to understand every aspect of the text prompt. It often seems to fail when more than one concept or subject appears in the prompt.
It can also accept styles of art, such as oil painting, concept art, or 3D rendering, as well as cues toward particular artists such as Anna Dittmann or Greg Rutkowski. Combining these can give either great or horrible results. It’s a bit hit and miss, but if you keep a prompt to one clearly defined main subject and a few (but not too many) style cues, you can get some pretty interesting results, as the sketch below illustrates.
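In code, that rule of thumb is just string composition. Here’s a small sketch, reusing the pipe object from the earlier snippet; the subject and style cues are made-up examples:

```python
# Compose a prompt as one clearly defined main subject plus a few
# style cues, per the "one subject, not too many cues" rule of thumb.
subject = "a cute grey cat with yellow eyes wearing a blue bow tie"
style_cues = ["oil painting", "concept art", "by Anna Dittmann"]

prompt = ", ".join([subject] + style_cues)

# guidance_scale controls how strongly the model follows the prompt;
# higher values stick closer to the text at some cost to variety.
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=50).images[0]
image.save("styled_cat.png")
```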
Here are some things Stable Diffusion has generated in my experimentation so far.