Google's large language model, LaMDA, has recently been making headlines after a Google engineer (now on administrative leave) claimed to be swayed by an interview in which LaMDA described the experience of being conscious. Almost everyone else who has used these large text-generating AIs, myself included, is entirely unconvinced. Why?
I recently started playing with DALL-E 2, which will attempt to generate an image to go with whatever text prompt you give it. Like its predecessor DALL-E, it uses CLIP, which OpenAI trained on a huge collection of internet images and nearby text. I've experimented with a few methods based
If you're going to open a late-night donut shop, you're going to need a unique set of over-the-top donuts to set the proper festive atmosphere. But how to keep the ideas coming? I decided to see what donut ideas I could get using OpenAI's GPT-3 text-generating models. I collected seven
When OpenAI released the text-generating neural network GPT-2, it did so in stages, in part for fear that people might use the more advanced models to generate misinformation. Now in 2022 we do indeed have people passing off AI-written text as human, but rather than being divisive, it’s
"The Megalodon was a large bivalve, measuring up to 2.5 meters in length. Its shell was covered in spines, and it had a large, powerful jaw for crushing prey." Although the megalodon is most widely known as a giant prehistoric shark, I recently learned that Megalodon with