Few could have predicted that the must-have toy of 1998 would be an owl-like bilingual hamster doll with infrared sensors, or that in 1975 kids would be begging their parents for a toy that was literally a single rock in a cardboard box. But could AI have predicted it? Could
When I was a kid I looked forward to opening advent calendar doors in December, although the pictures behind the doors were pretty forgettable. A bell. A snowflake. If you were lucky, a squirrel. So I thought I'd see if I could generate something a bit more interesting, with the
I built an advent calendar by using GPT-3 to generate descriptions and Pixray to illustrate them! But some of the descriptions were too long for the calendar doors, or else Pixray seemed to really struggle with them. I've collected some of my favorites here!
One strange thing I've noticed about otherwise reasonably competent text-generating neural nets is how their lists tend to go off the rails. I noticed this first with GPT-2. But it turns out GPT-3 is no exception. Here's the largest model, GPT-3 DaVinci, finishing this list of ingredients you put in
Here's "Ice Cream Planet Swirl", as generated by Pixray. Full prompt: Ice Cream Planet Swirl #8bit #pixelart. Colors are chocolate, minty green, and cream. Pixray uses CLIP, which OpenAI trained on a bunch of internet photos and associated text. CLIP acts as a judge, telling Pixray how much its images
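The CLIP-as-judge idea can be sketched in a few lines: embed the image and the text prompt into a shared vector space, then score how well they match with cosine similarity. The embeddings below are tiny made-up stand-ins, not real CLIP outputs, which would come from CLIP's image and text encoders and have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Score in [-1, 1]: higher means the image and prompt embeddings point
    # in more similar directions, i.e. CLIP thinks they match better.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in embeddings for illustration only.
image_embedding = [0.2, 0.9, 0.1]
prompt_embedding = [0.1, 0.8, 0.3]

score = cosine_similarity(image_embedding, prompt_embedding)
```

An image generator like Pixray then tweaks its image over and over to push this score up, which is how the judge steers the art toward the prompt.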
This bonus post is now unlocked for everyone! In the main post I experimented with Pixray Swirl, which I could use to build an image and then sort of zoom into it. It took me a while to get the hang of steering, and I never quite got precise enough
One of the fun things about working with a giant text-generating model with general internet training is that when it finishes responding to one of your prompts, it'll sometimes continue with a prompt of its own. (This can also be one of the NOT fun things, if its prompts veer