Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugw75heAZ…: That was profoundly interesting, and your way of forcing the ai to see its own i…
- ytr_Ugx0XGFpj…: "Hi Gunjan, you got the right answer and also with the explanation. Kudos. The c…
- ytc_UgxXMmagL…: Who else does this? I play character ai for hours, usually to traumatize them, …
- ytc_Ugw7VN4ZN…: I wouldnt trust a driverless semi, ever. No way its "safer". It cant feel any em…
- ytc_UgyZAGKak…: AI can't draw someone getting impled while doing the nasty with a pregnant futa …
- ytc_UgxTwKnbM…: Its all BS...one EMP event and AI is gone. And we are all in stone age. Dont pan…
- ytc_UgxySqRTB…: Yes, if someone asks who will pay if all are unemployed, asks the wrong question…
- ytc_Ugyf_k4I1…: HUMANS SLOW TF DOWN WE DOING TO DAMM MUCH LIKE ALWAYS. Go watch I robot these pe…
Comment
> They’re training AI by listening to calls to things like nursing help lines. Insurance companies appear to be preparing to reduce or eliminate the skilled professionals by using AI to answer and assist in healthcare questions. Greed has no concern about people losing jobs, only investor value.

youtube · AI Responsibility · 2026-04-11T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxjY-t0sFl-5Ob1hDV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyBHM3sjU3ivTQ-NUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw8FkjXc-2xEJiBnq14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwoWWBsNQ4QWFddjcx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzvzMjDDmGFyXMExbJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxM2TVRx3qUrd2w_IF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw9GTzOplq4Kpmnoql4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxsQr49s2l-sUxuTqt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzx-j_6Pxmt5ZUUuB94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJi-w_oa2z59zrfrR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
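A raw response like the one above is a JSON array of per-comment records, one label per coding dimension. A minimal sketch of how such a response might be parsed and validated before storage (the `parse_coding_response` name is hypothetical, and the allowed label sets are an assumption inferred only from the values visible on this page, not the actual codebook):

```python
import json

# Allowed labels per coding dimension. These sets are inferred from the
# values that appear in the raw responses above; the real codebook may
# define additional labels.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    records) into a dict keyed by comment ID, rejecting unknown labels."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} label {value!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Keying the result by comment ID makes the "Look up by comment ID" operation above a plain dictionary access on the parsed response.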