One strange thing I've noticed about otherwise reasonably competent text-generating neural nets is how their lists tend to go off the rails. I noticed this first with GPT-2. But it turns out GPT-3 is no exception. Here's the largest model, GPT-3 DaVinci, finishing this list of ingredients you put in
Here's "Ice Cream Planet Swirl", as generated by Pixray. Full prompt: Ice Cream Planet Swirl #8bit #pixelart. Colors are chocolate, minty green, and cream. Pixray uses CLIP, which OpenAI trained on a bunch of internet photos and associated text. CLIP acts as a judge, telling Pixray how much its images
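The judge-and-generator loop is simple at heart: CLIP embeds both the image and the prompt text into a shared space, and the closer the two embeddings, the better the score. Here's a minimal sketch of that scoring loop, with random vectors standing in for the real CLIP encoders (the embedding dimension and candidate count are arbitrary choices for illustration, not Pixray's actual settings):

```python
import numpy as np

# Stand-in for CLIP's encoders: real CLIP maps images and text into a
# shared embedding space; here we just use random fixed-length vectors
# so the scoring loop is runnable on its own.
rng = np.random.default_rng(0)
EMBED_DIM = 512

def normalize(vec):
    """Scale a vector to unit length, as CLIP embeddings are compared."""
    return vec / np.linalg.norm(vec)

def clip_score(image_emb, text_emb):
    """Cosine similarity: the judge's rating of how well image matches text."""
    return float(np.dot(normalize(image_emb), normalize(text_emb)))

# The generator proposes candidate images; the CLIP-style judge scores
# each against the prompt embedding, and the best-scoring one survives.
prompt_emb = rng.standard_normal(EMBED_DIM)
candidates = [rng.standard_normal(EMBED_DIM) for _ in range(8)]
scores = [clip_score(c, prompt_emb) for c in candidates]
best = candidates[int(np.argmax(scores))]
print(f"best candidate score: {max(scores):.3f}")
```

In the real system the generator then nudges its image toward a higher score and asks the judge again, over and over, which is what produces those gradual swirls into prompt-shaped imagery.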
This bonus post is now unlocked for everyone! In the main post I experimented with Pixray Swirl, which I could use to build an image and then sort of zoom into it. It took me a while to get the hang of steering, and I never quite got precise enough
One of the fun things about working with a giant text-generating model with general internet training is that when it finishes responding to one of your prompts, it'll sometimes continue with a prompt of its own. (This can also be one of the NOT fun things, if its prompts veer
I'd been vaguely aware of pigeons until I read my friend Rosemary Mosco's book A Pocket Guide to Pigeon Watching. Now I'm in love. It started with the back cover, where there's a pigeon trying so hard to impress me with its puffed-up plumage. They're all such good pigeons. From
"What would it take to teach a machine to behave ethically?" A recent paper approached this question by collecting a dataset that they called "Commonsense Norm Bank", from sources like advice columns and internet forums, and then training a machine learning model to judge the morality of a given situation.
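Stripped to its skeleton, the recipe is: collect (situation, judgment) pairs, then train a model to map new situations to judgments. Here's the crudest possible version of that idea, using a handful of made-up toy pairs (nothing like the real Commonsense Norm Bank) and word overlap instead of a neural net:

```python
from collections import Counter

# Toy (situation, judgment) pairs -- purely illustrative stand-ins,
# not drawn from the paper's actual dataset.
train = [
    ("returning a lost wallet to its owner", "good"),
    ("helping a neighbor carry groceries", "good"),
    ("taking credit for a coworker's idea", "bad"),
    ("cutting in line at the store", "bad"),
]

def judge(situation):
    """Pick the label whose training examples share the most words
    with the new situation (a bag-of-words scheme, nothing like the
    paper's neural model)."""
    words = set(situation.lower().split())
    scores = Counter()
    for text, label in train:
        scores[label] += len(words & set(text.lower().split()))
    return scores.most_common(1)[0][0]

print(judge("taking credit for someone's work"))  # -> bad
```

The gap between this sketch and the real thing is exactly where the interesting failures live: a judge that has only ever seen its training examples will confidently misjudge anything phrased unlike them.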