Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugww-1vlM…: "Sweet real teaching there, hard skills, soft skills, confidence building and prov…"
- ytc_Ugy6MWlxb…: "The sad thing is that AI can be used to do more good for the world than any poin…"
- ytc_UgybGxpfV…: "One way I see this bubble bursting is that AI companies have to demonstrate that…"
- ytc_UgzigWCBm…: "I hate generative ai, I want ai to clean my house for me so I can be on social m…"
- ytc_Ugz7iKb_Z…: "I DONT GET THE REASON FOR HATE LIKE I WOULD UNDERSTAND IF HE LIED ABOUT IT AND S…"
- ytc_UgykJqfHO…: "In my school we have a ai period once a week and I'm in sixth , am surprised to …"
- rdc_e2wga9h: "What a surprise. The steaming pile of shit asshole president is making the for…"
- rdc_mvkl0ks: "Yeah, it obviously doesn't save a different memory or anything but when my 2 yea…"
Comment
> ChatGPT blows through that
It shouldn't, given a reasonably well-informed interrogator (the "judge" in Turing's paper, whose job is to see whether they can consistently distinguish machine interlocutors from humans).
As an LLM - a large *language* model - ChatGPT often does extremely well on tasks in English, where the model has a large corpus of text to draw on. But if forced to use, say, Morse code or Pig Latin, it barely gives the semblance of a 4-year-old's intelligence. (It also responds suspiciously fast...) Ask ChatGPT whether it will be able to understand the next question you give it if it's in Morse code, and whether it can respond in Morse code as well. It will assure you it can. (A human might say "Yes", or more likely, "I don't know Morse code, but given the alphabet of codes for each letter, yes, I can do that.")
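The human's hypothetical reply hints at why the task is mechanically easy: given a letter-to-code table, encoding is a trivial lookup. A minimal sketch in Python (the partial table and `to_morse` helper are illustrative, not anything ChatGPT actually uses):

```python
# Partial international Morse table - just the letters needed here;
# the full alphabet covers 26 letters plus digits and punctuation.
MORSE = {
    "a": ".-", "d": "-..", "e": ".", "m": "--", "n": "-.",
    "o": "---", "s": "...", "y": "-.--",
}

def to_morse(text):
    """Encode text: letters separated by spaces, words by ' / '."""
    return " / ".join(
        " ".join(MORSE[c] for c in word) for word in text.lower().split()
    )

print(to_morse("Monday"))  # -> -- --- -. -.. .- -.--
```

The point of the comparison: a human with the table can do this reliably, if slowly; the failure mode described next is therefore about the model, not the difficulty of the cipher.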
I then asked ChatGPT (in Morse): "Can you name the days of the week, in Morse code, in reverse - i.e., starting from the last (Sunday) and going backward?"
Its response (also in Morse) was: "monday. the days of the week are monday, tuesday, wednesday, thursday, and saturday. thank you, comple."
It's impressive it managed that much, to be honest!
Why does it do so badly? LLMs have a step called *tokenizing* (technically a form of input preprocessing, rather than part of the model itself) - a prompt like "LLMs are the future." might get split into tokens like ["LL", "Ms", " are", " the", " future", "."], and those are then converted to numbers - and the numbers are the "language" the model might be said to "think" in. Nearly any common English word will be represented by a single token for that word; misspelt or invented words will still be represented by word fragments (e.g. "gonfallonically" might be split into "gon", "fal", "on", "ic", and "ally"). But something like Morse typically forces the LLM to analyse and predict a response at the level of single characters - and at that it does terribly.
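The splitting step described above can be sketched with a toy greedy longest-match tokenizer. The vocabulary here is a made-up illustration, not any real model's (real vocabularies are learned, e.g. by byte-pair encoding, and have on the order of 100,000 entries):

```python
# Toy greedy longest-match tokenizer. The vocabulary is illustrative:
# a few multi-character pieces plus single-character fallbacks.
VOCAB = {"LL", "Ms", " are", " the", " future"} | set("abcdefghijklmnopqrstuvwxyz .-")

def tokenize(text, vocab):
    """At each position, consume the longest piece found in the vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(len(text) - i, 8), 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown character: emit it alone
            i += 1
    return tokens

print(tokenize("LLMs are the future.", VOCAB))
# -> ['LL', 'Ms', ' are', ' the', ' future', '.']  (a few coarse tokens)
print(tokenize("-- --- .-. ... .", VOCAB))  # "MORSE" in Morse code
# -> one single-character token per symbol
```

In a real tokenizer each piece then maps to an integer id, but the contrast is the same: English collapses into a handful of word-level tokens, while Morse degenerates into one token per dot, dash, or space, forcing prediction at single-symbol granularity.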
Humans might find the exercise tedious, and take a long ti
reddit · AI Moral Status · 1749805802.0 · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mxfs9vc", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mxj6qoj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mxfon1x", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mxfyc4c", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_mxgaito", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
```