DALL-E3 generates candy hearts
I've experimented a couple of times with generating candy heart messages using various kinds of machine learning algorithms. Short messages were just about all the early text-generating neural networks could handle. Now we've come back around to approximately the same performance, but with orders of magnitude more computational resources consumed. (Although I don't have to photoshop the messages onto candies any more, so that's nice.) Here's DALL-E3 generating candy hearts:
My impression is that the text here is operating not so much on the level of "here are plausible candy heart messages" as "here are some clusters of pixels that are associated with candy hearts". As with most AI-generated imagery, it's most impressive at first glance, and gets worse the longer you look.
I've noticed that the more text DALL-E3 tries to put in an image, the less readable that text becomes - I'm fairly surprised at how legible most of the candy hearts above were. (Maybe it helps set expectations that the real-life candies are often garbled.) When I ask for fewer hearts, they end up crisper, but not necessarily more coherent.
But there's another possibility that amuses me. The search "candy hearts with messages" brings up images from past AI Weirdness candy heart experiments. These were likely part of DALL-E3's training data, and they may have had an effect on the weirdness of the generated hearts I'm getting now.
When I ask for candy hearts with "quirky, AI-style messages", I get candy hearts that are (to me) indistinguishable in quality from the first grid.