Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
8:10 This is why people with brains in their heads and/or aren't delusional about progress for progress' sake are worried. Do men like this have even a modicum of history knowledge? Is he actually insane enough to believe good will win here or that good is even the majority? It's not AI that frightens me. It's the ridiculous species of the human race creating it. It's a little reductive I guess, but this is no different than "guns don't kill people. People kill people." There won't be a Mutually Assured Destruction rule for AI. It will be more insidious and subtle. We'll rot our species from the inside out, and the less aware, stupid, or immoral masses either won't care or won't realize until it's too late. AI won't blow us up, but it'll result in the extinction of the human spirit, which is effectively the same thing.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2024-01-16T22:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyR-gMBvt1z0HjKUuB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"urgency"},
{"id":"ytc_UgztHZWSnR3PcVmz4Cl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx9f01lBDzYg2AWZXR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyTQSQkoMqGKOVSWuZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyI-myVXIBnnhm3Wcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxkAlOVgHP-66Bf2o94AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzZngtzSfeFtjtoLq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgymkLH6szzXt7LahQB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzzh7aqDjUpTo_EQWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYFRhoJy9-uojNroB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
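A raw batch response like the one above is only useful downstream if every row carries a known value for each coding dimension. The sketch below parses such a response and drops malformed rows; the allowed value sets are inferred from the output visible on this page (the real coding scheme may include categories not seen here), and `validate_codings` is a hypothetical helper, not part of the pipeline shown.

```python
import json

# Allowed values per dimension, inferred from the visible output above;
# the actual coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "urgency", "indifference", "approval", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed rows.

    A row is kept if it has a string id and a recognized value for
    every coding dimension.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if isinstance(row.get("id"), str) and all(
            row.get(dim) in values for dim, values in ALLOWED.items()
        ):
            valid.append(row)
    return valid

# Example: one well-formed row is kept, one with an unknown value is dropped.
raw = (
    '[{"id":"ytc_example1","responsibility":"developer","reasoning":"virtue",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"martians","reasoning":"virtue",'
    '"policy":"none","emotion":"outrage"}]'
)
print(validate_codings(raw))
```

Validating at ingest time keeps hallucinated or off-schema labels out of the coded dataset instead of surfacing them later as inexplicable table values.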