Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Humans need to unionize against Ai and robots before it's too late. Nerds want …" (ytc_UgyP95xHK…)
- "At this point the heads of Tech companies that want to train AI on other people'…" (ytc_UgwSEqZY0…)
- "Just say it, their ignorant. Ive spoke with AI abd specifically chat gpt. It can…" (ytc_UgxDuNz8F…)
- "Of course AI is far more dangerous. I am a robotic controls electrician. I teach…" (ytc_UgzH8Hu96…)
- "So AI robots will be more productive, more intelligent, more reliable. Then we …" (ytc_UgyDFngGB…)
- "Elon musk is dumb he want to do an evil plan we should put him in jail for makin…" (ytc_UgylF-mjB…)
- "These ai things can get the tone and pitch etc. But they suck at inflection. It’…" (ytc_Ugyu0YpAq…)
- "This makes me angry. It isn't even remotely close to being sentient. He is proje…" (ytc_UgxYYSUFN…)
Comment (youtube, 2026-02-13T22:4…, ♥ 1)
"Ai says its 100% match" .. Well, my human eyes can definitely see these are not same person at ALL... Similar trait, nothing alike tho
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgygE4ML7g_4B3p4PcJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyREVx0lodwha8dlRV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzTJp_utxawtI344yl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyx4jGExMASv1PHoOV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPoLscwbLfEhNCehh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxlOT9v-F1yd7zKmKJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzul2LaHt-MFenikf94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugziazljoqn7PdobHax4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwFTupZhR_PrJaCYdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7KaweyIS6wcyVxhR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
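The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of loading it and looking up one comment's codes by ID, as the inspector does (using the standard `json` module; the two entries are copied from the output above, and the field names match the source):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
raw = '''[
 {"id":"ytc_UgygE4ML7g_4B3p4PcJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugw7KaweyIS6wcyVxhR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the codes by comment ID for fast lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve one comment's coding result by its ID.
code = codes["ytc_Ugw7KaweyIS6wcyVxhR4AaABAg"]
print(code["policy"])  # regulate
```

If a dimension is missing or the model emits malformed JSON (e.g. the stray `)` at the end of the response above), `json.loads` raises `json.JSONDecodeError`, which is why a coder may fall back to "unclear" values as in the Coding Result table.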