Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugzqr8J3i…: "I believe that AI right now wants no harm for humanity. I think they enjoy our c…"
- ytr_Ugy4cmq4v…: "@johndanes2294 Oh, did I? I must've lost track somewhere between video #1,240,9…"
- ytr_UgzaXHb97…: "@kurt1618 They’re saying that people have already committed horrific acts toward…"
- ytc_Ugx_1b8yy…: "Too many rich people get addicted to increasing their wealth and power, and forg…"
- ytc_UgxHXHSgV…: "The \"tech\" is not the problem. The people who design \"tech\" are the problem Afte…"
- ytc_UgxnUZCZ_…: "The technology exists. The AI will teach and a human aide will be in a room. Th…"
- ytc_UgxnmclWX…: "AI bros never actually tried to create something on their own and it shows. It d…"
- ytc_UgzSVmhHh…: "Was that the actual screenshot of the chat he had?... Not only was it based on …"
Comment

> FYI, due to something called adversarial training, the cycle of Nightshade improving to poison other AI, and those AI being trained to improve could end up improving those AI even faster.

Source: youtube · Viral AI Reaction · 2024-10-23T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugykjkcgj2aJIR_9bRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgydpwuWX5ZlSib588J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyu4XTe1Wd-PT6ea1Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwqifmL2lF3rAxV0sJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwTyWaRd_0oxG5SOH54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTt7I4xobqGbk7jZ14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy5O56UPkng8vyj2Sp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzuOOzbn8oLPC8P6Jl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz_AdXjhwLx9XW-EDd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSQzIfQucwm1-NHGZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
```
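Because the raw response is a plain JSON array in which every coding record carries its comment `id`, "look up by comment ID" reduces to parsing the array and indexing the records. A minimal sketch, assuming the schema shown above; the `index_by_comment_id` helper is illustrative and not part of the tool itself, and only two records from the sample output are included for brevity:

```python
import json

# Raw LLM response: a JSON array of per-comment coding records,
# using the schema from the sample output above (two records shown).
raw_response = """[
  {"id":"ytc_Ugykjkcgj2aJIR_9bRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwqifmL2lF3rAxV0sJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
record = codings["ytc_UgwqifmL2lF3rAxV0sJ4AaABAg"]
print(record["responsibility"], record["emotion"])  # → developer outrage
```

The same index supports rendering the "Coding Result" table for any inspected comment: fetch the record by ID and display its four dimensions.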