Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
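Under the hood, an ID lookup only needs to scan the store of coded records. A minimal sketch in Python, assuming (this is not necessarily the tool's actual storage) that records live in a JSONL file named coded_comments.jsonl with the same fields as the raw response shown below:

```python
import json
from typing import Optional

def lookup_by_comment_id(comment_id: str,
                         path: str = "coded_comments.jsonl") -> Optional[dict]:
    """Return the coded record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # one JSON object per line
            if record.get("id") == comment_id:
                return record
    return None

# Example: one of the IDs visible in the raw response below.
print(lookup_by_comment_id("ytc_UgyloTJ_ly2LXBcvLHJ4AaABAg"))
```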
Random samples (click to inspect)
- "How could AI not jeopardize human safety when we build AI to look like humans?.😂…" (ytc_UgxiP3t4q…)
- "This is not an argument that makes sense when people are using AI, with stolen a…" (ytc_Ugyq8kcMO…)
- "I don't understand why everyone tries to complicate the issue. The issue isn't i…" (ytc_UgyhsNprk…)
- "Talking to a female chatbot will never land a guy in handcuffs and a prison term…" (ytc_Ugxm4aOrT…)
- "ATTENTION EVERYONE ] THERE IS THIS WEBSITE CALLED DEEPFAKE IT I…" (ytc_UgzNZObJy…)
- "You sound like an AI about yourself since this is probably the 10th time I've se…" (ytr_Ugy056FzT…)
- "2:20 I’ve been on both ends of the gpa spectrum. Students with a higher academic…" (ytc_UgynT_ZXv…)
- "In the near future lots of people will make money as AI slop clean-up consultant…" (ytc_UgzdDT3Qz…)
Comment
Seriously, no Lie: I asked Goggle AI about the current Gold/Platinum ratio. It gave me an incorrect answer—an answer that would have been correct many months ago. I then asked Google AI why it was wrong about the current Gold/Platinum ratio, and it then gave the correct answer and literally had excuses about old data sets. Remember this strategy: Ask AI why is it wrong. AI does not like to be wrong, and has an almost existential crisis—for real. But don’t play this game, unless you have the goods. I have asked AI about the inferred holding of a particular mining stock company. Specifically, I asked about the Silver holdings of a Gold/Copper mine. AI at first said “none.” Then I said that it was wrong, and it came back with the right answer. This is no joke, no lie. These things actually happen. Now, apply this to many things, but for example a real estate question.
Source: youtube · AI Governance · 2025-12-29T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
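The record behind this table can be modeled as a small typed structure. A sketch: the id, responsibility, reasoning, policy, and emotion field names come from the raw response below, and the listed values are the ones visible there; the coded_at field name is an assumption.

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coded comment, mirroring the table above."""
    id: str
    responsibility: str  # ai_itself | user | company | developer | none | unclear
    reasoning: str       # consequentialist | deontological | virtue | unclear
    policy: str          # regulate | none | unclear
    emotion: str         # indifference | outrage | fear | approval | mixed | unclear
    coded_at: str        # ISO 8601 timestamp; field name is an assumption
```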
Raw LLM Response
[
{"id":"ytc_UgyrMkvBqhNlKrYJt2p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyiXeVhaXiXhKVEyn54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeM1kxR_m_ePuRgbN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzG_Dmavffk1zgYRJR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx1m_XzS5UW8DcWeb14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgymX1PqG3vFhtl9b3x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzQkrkzoL0zwJIob694AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx-dtRB-5Pj-9rj93V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9g5d4KZ1-h0IOFN14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyloTJ_ly2LXBcvLHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
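The raw response is a plain JSON array with one object per comment in the batch, so it can be parsed and indexed by comment ID directly. A minimal sketch (the sample input repeats the last record above):

```python
import json

raw_response = '''[
  {"id": "ytc_UgyloTJ_ly2LXBcvLHJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]'''

records = json.loads(raw_response)     # the raw LLM response is valid JSON
by_id = {r["id"]: r for r in records}  # index the batch by comment ID
print(by_id["ytc_UgyloTJ_ly2LXBcvLHJ4AaABAg"]["emotion"])  # -> indifference
```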