Recently I've been experimenting with DALL-E 2, one of the models that use CLIP to generate images from text descriptions. It was trained on internet text and images, so there's a lot it can do, and a lot of ways it can remix the stuff