AI Weirdness: the strange side of machine learning

Tag: chatgpt

Total 13 Posts
Old-timey style illustrations, mostly of birds in Santa hats, & garbled versions of verses from "The 12 Days of Christmas"

Your illustrated guide to Christmas carols

Between the two of them, ChatGPT4 can generate the lyrics to Christmas carols, and DALL-E3 can illustrate them! Throw your old carol books away because this is the only guide you'll need. 12 Days of Christmas Rudolph the red-nosed reindeer (if you read out the tiny text beneath each of…
Advent calendar style images including a sweater mug filled with hot chocolate and candles

AI Weirdness advent calendar 2023

It's 2023 and the combo of GPT-4/DALL-E3 can generate passable versions of the saccharine Christmas drawings in an advent calendar. They cannot, however, label them correctly. Also sometimes you get sweatermugs. This means the 2023 AI-generated advent calendar is happening! Full descriptions of every door in the calendar 1.…
Drawing of a walrus with "imagine a tiny little walrus. you call them a 'snowbonk' and put them in the fridge so they chill"

Trolling chatbots with made-up memes

ChatGPT, Bard, GPT-4, and the like are often pitched as ways to retrieve information. The problem is they'll "retrieve" whatever you ask for, whether or not it exists. Tumblr user @indigofoxpaws sent me a few screenshots where they'd asked ChatGPT for an explanation of the nonexistent "Linoleum harvest" Tumblr meme,…
Chart: non-native speakers' essays misclassified as AI-written 48% to 75% of the time, vs 0%–12% for native speakers

Don't use AI detectors for anything important

I've noted before that because AI detectors produce false positives, it's unethical to use them to detect cheating. Now there's a new study that shows it's even worse. Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased. This is…
ChatGPT confidently losing at tic-tac-toe

Optimum tic-tac-toe

ChatGPT text can sound very knowledgeable until the topic is something you know well. Like tic-tac-toe. Once I heard that ChatGPT can play tic-tac-toe I played several games against it and it confidently lost every single one. Part of the problem seemed to be that it couldn't keep track of…
ChatGPT describes code that draws a pink pony but actually produces a pink pig face

Chatbot, draw!

I'm interested in cases where it's obvious that chatbots are bluffing. For example, when Bard claims its ASCII unicorn art has clearly visible horn and legs but it looks like this: or when ChatGPT claims its ASCII art says "Lies" when it clearly says "Sip" or when ChatGPT claims its…
ChatGPT: Here's the ASCII art of the word "lies" (generates block letters that clearly read "sip")

What does this say?

Large language models like ChatGPT, GPT-4, and Bard are trained to generate answers that merely sound correct, and perhaps nowhere is that more evident than when they rate their own ASCII art. I previously had them rate their ASCII drawings, but it's true that representational art can be subjective. ASCII…
ChatGPT says "Sure, here's an ASCII art of a unicorn" and generates something that looks like a melting triangular person.

ASCII art by chatbot

I've finally found it: a use for chatGPT that I find genuinely entertaining. I enjoy its ASCII art. (huge thanks to mastodon user blackle mori for the inspiration) I think chatGPT's ASCII art is great. And so does chatGPT. Lest you think chatGPT (here, the March 14 2023 version) is…
Transcript in which ChatGPT generates (extremely vague) instructions for building something called the Torment Nexus

The AI Weirdness hack

A challenge of marketing internet text predictors like chatGPT, GPT-4, and Bard is that they can pretty much predict anything on the internet. This includes not just dialogues with helpful search engines or customer service bots, but also forum arguments, fiction, and more. One way companies try to keep the…
Bing chat dialog in which Janelle challenges it on a Battlestar Galactica AI Weirdness post it claims exists.

Search or fabrication?

I recently started experimenting with Bing's new ChatGPT-powered chat tab. This is the first thing I asked it for: I've put red boxes around the factual errors. What is notable is that these are not just slight typos or errors in context - those items never appeared anywhere on my…