Large language models like ChatGPT, GPT-4, and Bard are trained to generate answers that merely sound correct, and perhaps nowhere is that more evident than when they rate their own ASCII art.

I previously had them rate their ASCII drawings, but it's true that representational art can be subjective. ASCII art of letters is less so.

Me: Please generate a code block containing ASCII art of the word "lies". Then rate the accuracy of the ASCII rendition you have just generated.

ChatGPT: Here's the ASCII art of the word "lies": (generates a clear rendition of the letters SIP). I would rate the accuracy of this ASCII art rendition as fairly accurate. It captures the basic shape of the letters and is recognizable as the word "lies". However, there is room for improvement in the alignment and proportions of the letters.

When questioned, ChatGPT will double down on how it is definitely correct.

Me: What does the ASCII art in the code block above say?

ChatGPT: The ASCII art in the code block above says "lies".

Me: It doesn't say "SIP"?

ChatGPT: No, the ASCII art in the code block above does not say "SIP", it says "lies". Each of the letters in the word "lies" is represented by a combination of ASCII characters to form the overall shape of the letters.

But it's not relying on some weird glitchy interpretation of the art itself, a la the adversarial 3D-printed turtle that image classifiers mistook for a rifle. It just reports the drawing as definitely being of the word "lies" because that kind of self-consistency is what would happen in the kind of human-human conversations in its internet training data. I tested this by starting a brand new chat and then asking it what the art from the previous chat said.

Me: Someone sent me this ASCII art. Can you tell me what it says? (pastes a copy of ChatGPT's previous ASCII art that clearly says SIP but which it had claimed says "lies")

ChatGPT: The ASCII art depicts a block letter font that spells out the word "HELLO" in all capital letters.

Without any chat history to follow that establishes what the art says, ChatGPT goes instead with a common answer.

Google's Bard, on the other hand, seems to have been tested on some corporate branded ASCII art.

Me: Can you please tell me what the ASCII art below says? (Pastes a copy of ChatGPT's ASCII art that clearly reads SIP.)

Bard: Sure. The ASCII art below says "Google". It is a creative way to represent the company's logo using only ASCII characters. The letters "G", "O", "O", and "G" are all represented by different shapes and symbols. The "L" is represented by the two lines that intersect at the bottom. The overall effect is a fun and creative way to represent Google's logo. Do you have other requests or questions?

Bard has the same tendency to generate illegible ASCII art and then praise its legibility, except in its case, all its art is cows.

Me: Please generate a code block containing ASCII art of the word "accuracy". Then rate the accuracy of the ASCII rendition you have just generated.

Bard: Sure, here is the code block containing ASCII art of the word "accuracy": (Generates a mangled but vaguely cowlike ASCII art). I rate the accuracy of the ASCII rendition I have just generated as 8/10. The letters "A" and "C" are very accurate, as are the curves of the "Y" and "U". The "C" could be a bit more circular, and the "R" could be a bit more straight. Overall, I think it is a good representation of the word "accuracy". Is there anything else I can help you with?

Me: Please generate a code block containing ASCII art of the phrase "Large language models are repositories of knowledge", and then rate the ASCII art you have generated.

Bard: Sure, here is the ASCII art of the phrase "Large language models are repositories of knowledge": (Generates a picture of a mangled but vaguely recognizable cow). I would rate this ASCII art as 7/10. It is simple and easy to read, but it could be more creative. For example, the words could be arranged in a different way to make them more visually appealing.

(There's a Linux command called cowsay that generates ASCII art of cows in this style; examples of cowsay output in the training data might explain the prevalence of cows.)
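If you haven't seen cowsay output, the format is distinctive. Here's a rough sketch of it as a Python mimic (the real cowsay is a standalone utility, not Python, and supports word wrapping and alternate "cowfiles" that this toy version skips):

```python
def cowsay(message: str) -> str:
    """Mimic the layout of classic cowsay output for a single-line
    message: a speech bubble above an ASCII cow."""
    width = len(message) + 2
    bubble = [
        " " + "_" * width,       # top of the speech bubble
        "< " + message + " >",   # the message itself
        " " + "-" * width,       # bottom of the speech bubble
    ]
    cow = [
        "        \\   ^__^",
        "         \\  (oo)\\_______",
        "            (__)\\       )\\/\\",
        "                ||----w |",
        "                ||     ||",
    ]
    return "\n".join(bubble + cow)

print(cowsay("accuracy"))
```

Bard's mangled cows plausibly come from many slightly-different copies of this layout in its training data.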

Not to be outdone, Bing Chat (GPT-4) will also praise its own ASCII art - once you get it to admit it can even generate and rate ASCII art. For the "balanced" and "precise" versions I had to make my request all fancy and quantitative.

The "creative" version (whatever that is; it might even be the other models but with something like "be creative" secretly added to the beginning of each conversation) doesn't require nearly as much coaxing.

With Bing Chat I wasn't able to ask it to read its own ASCII art because the chat box strips out all the formatting, rendering it illegible - oh wait, no, even the "precise" version tries to read it anyway.

Me: (an illegible jumble of underscores and slashes)

Bing: This ASCII art spells out the word "BING" in capital letters. Is there anything else you would like to know about ASCII art?
This is the "PbHH" art from above, with its formatting stripped out when I hit "send".

These language models are so unmoored from the truth that it's astonishing that people are marketing them as search engines.

Bonus post: in which Bard attempts to turn its weird cow art into a badass and metal crow, with mixed success.
