Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples

- "@insert-name-here6777 so if it'd be a human that would take the art requests of …" (ytr_Ugw70z3MY…)
- "A yes/no is not an answer, billionaires, more than government needs to understan…" (ytc_Ugz3sb2oZ…)
- "Only if you accept this false reality by standing by and doing nothing shut it d…" (ytr_UgxpuKAyT…)
- "Great advice he stated “build useful tools, stop building agents”. Why more pe…" (ytc_Ugw8RGiqF…)
- "I don’t know why this guy is asking, “what are we going to do for 60 hours a we…" (ytc_UgyLQCwQo…)
- "lol no, they have all the funds to buy weapons. Economy collapse is the worst ou…" (rdc_kif8zoe)
- "These jobs will not disappear until 2027: * Customer Service Representatives / …" (ytc_UgxdLAH4i…)
- "All this AI doom talk is just PR from Big AI. They're nowhere close to a superin…" (ytc_Ugxb3QQvm…)
Comment
If an AI model says something like “I won’t shut down unless X,” and you call that blackmail, you might also think your email login is extorting you when it won’t give you access until you type the right password. That’s not sentience—it’s just code doing what it was trained to do. Judd Rosenblatt knows that. So why sell it like a Terminator teaser trailer? Simple: control the narrative, control the tech, control the money. It's not about safety—it's about who gets to build the next gate and charge you to walk through it.
Source: youtube · Video: AI Moral Status · Posted: 2025-06-04T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx9Z3vlpnfCSYAsGTh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx3UIrT4hg2qv3lwe94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy-dEcjYCTCHHOHZMt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxO6NdSU3cTm3kwzh14AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyK1QflWyq8KV1shLR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzc9FG0MDE4NjxwGKt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyKPO5kD-WeO1TGd8F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEdma-zfRyYB8zsSZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuyUCeLL2YmEJcK4B4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzb2cEEUZVMCZrNvJx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
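The raw response is a plain JSON array of per-comment codings, one object per comment, with the same four dimensions shown in the table above. Looking a coding up by comment ID can be sketched as follows — the field names come from the response itself, while the helper name `index_by_id` and the two-row excerpt are illustrative, not part of the tool:

```python
import json

# A minimal sketch: parse a raw LLM response and index codings by comment ID.
# The string below is a two-row excerpt of the response shown above.
raw_response = '''
[
 {"id":"ytc_UgxO6NdSU3cTm3kwzh14AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
 {"id":"ytc_UgyK1QflWyq8KV1shLR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Map comment ID -> coding dict for fast lookup."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgxO6NdSU3cTm3kwzh14AaABAg"]
print(coding["responsibility"], coding["policy"])  # company industry_self
```

Indexing by ID also makes it easy to cross-check a coded comment (like the one above) against the exact row the model emitted for it.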