Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by picking one of the random samples below.
Random samples (click to inspect):

- "Just found you, a random reccomendation on my front page... Love your talking vo…" (ytc_Ugxiw_-Bu…)
- "I don’t understand why the non AI leading countries, all countries other than US…" (ytc_UgzfwCM5O…)
- "I actually own a company which addresses this concern. nexdata solutions builds…" (ytc_Ugw3PQPMU…)
- "if ai decides to take over, first thing it will probably do it get rid of the re…" (ytc_UgzknBUmy…)
- "Why y'all so mad at them? 😂😂😂 You are yourself saying that ai art doesn't threat…" (ytc_UgyVU455r…)
- "AI is extremely stupid to the point where it can't teach you to do long division…" (ytc_Ugwj2tDLg…)
- "I bet if those arguments were put right back into ChatGPT, it would've caught mo…" (ytc_UgxvWZY0V…)
- "Please beautiful phychic woman please come to rescue me so all of your fine ladi…" (ytc_UgzsZf0Lx…)
Comment
14:57 Yes. Hallucinations aren't an AI making a mistake, hallucinations are an AI lying. Because their goal is to convince you they've given a satisfactory result.
Saying "This is the answer: [Truth]" is success. Saying "This is the answer: [Lie]" is also success. Saying "I don't know the answer." is failure.
youtube · AI Moral Status · 2025-10-31T12:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzc3FoPlmUo13BjPY14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxkjE5TvWv7DeFuViF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx5PtLrX3BuN2PtF-54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyfUu7tZNYOzfxMjRF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHLr-umR1_GpE6nKJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgymNPOjttGRoP6gWWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz0FpkSc1Ljjwgy7Ux4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwudaEM1sWDSMh8F8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyE7nbis9oK0bLu-Wh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwt0ssXHCnyjjWW5Ql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
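The lookup-by-comment-ID view above can be reproduced from a raw response like this one: the model returns a JSON array with one coding row per comment, so indexing the rows by their `id` field gives constant-time lookup of any comment's coded dimensions. A minimal sketch, assuming the response is valid JSON in the format shown (the `index_by_id` helper name is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings (excerpt of the output above).
raw_response = '''[
  {"id": "ytc_Ugzc3FoPlmUo13BjPY14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkjE5TvWv7DeFuViF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index each coding row by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_by_id(raw_response)
coding = codings["ytc_Ugzc3FoPlmUo13BjPY14AaABAg"]
print(coding["responsibility"])  # → ai_itself
print(coding["emotion"])         # → indifference
```

In practice a real response may be truncated or contain malformed JSON, so production code would wrap `json.loads` in error handling before indexing.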