Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I think I speak for a lot of people, I can almost instantly detect AI. It just …" (`ytc_UgwPD51vl…`)
- "5:12 The art itself is not what makes us angry, it's the principle and the conce…" (`ytc_Ugz_eCfUz…`)
- "LLMs read millions of internet messages, text, and predict what words should com…" (`ytc_UgzHTHA9O…`)
- "You underestimate human intelligence / We will boycott companies that use Ai inst…" (`ytc_Ugz1cBCGy…`)
- "They should appoint search for robot workers instead of humans on this angle. Ca…" (`ytc_UgxxyBVZz…`)
- "bro i just fucking got here but sans undertale plushie. instantly captivated. i …" (`ytc_Ugxf5xmak…`)
- "Youtube fyp when I wanna watch yt and kill time: *slop* / Youtube fyp when I have …" (`ytc_UgztGtETv…`)
- "I think AI has value, for sure, but not in every circle. AI will be invaluable f…" (`ytc_UgyFfHX7R…`)
Comment
What nonsense. AI isn’t particularly dangerous—it’s just a tool. What’s dangerous are the people developing it, the people using it as more than just a tool, and the people buying into the escalatory hype and hysteria around AI—spread by the very same people making a fortune selling AI products.
youtube · AI Governance · 2026-03-16T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwK1w_gnBmM6l7zPEx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwhNrVNBgQRvmYTrRx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwo1KEPla2iHpXaHCx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwpi6fJgid2WwTZrmp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwM22TwhUi3D7qd_oB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxnp09EoZmFliXHd1t4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzyg-hn1iH8xz9tnxF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwiMJ2hVYl-2AydnG14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyw7pTpKYqcGCBP7zZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyh05UE72bpm-0ugcB4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
```
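A minimal sketch of how a raw response like the one above could be parsed to support the "look up by comment ID" feature: load the JSON array and key each coding by its `id` field. The function and variable names here are illustrative assumptions, not taken from the actual app.

```python
import json

# Two rows in the same shape as the raw LLM response shown above
# (truncated for brevity; the real response is a larger JSON array).
raw_response = """
[
  {"id": "ytc_UgwK1w_gnBmM6l7zPEx4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyw7pTpKYqcGCBP7zZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse the model output and key each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_Ugyw7pTpKYqcGCBP7zZ4AaABAg"]["emotion"])  # outrage
```

In practice a lookup tool would also want to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and flag rows missing an `id`), since the LLM is not guaranteed to return valid JSON every time.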