Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples — click to inspect
The practical difference is observability. A PC made you faster — it didn't make…
rdc_oi3f6qn
I swear pro-AI arguments are sounding so cult-ish now. "WE WERE NOT BORN WITH TH…
ytc_Ugyxp7Y2U…
Well truthfully every owner operator needs to park their enclosed trailer and in…
ytc_Ugz7aK4dZ…
as a software dev working daily with it, we are still far away from ai stealing …
ytc_UgzYBrK9k…
I have a question
Why do I always get raped by regular ai's
Like bro wtf 😭…
ytc_UgwrSAGID…
@cendrapolsner8438 Thanks for your time. I was just wondering, given it would b…
ytr_Ugw1rfDuC…
don't call them AI "artists" they're not artists, they want you to think their a…
ytc_UgwfPcWRl…
I've been into painting and drawing a big part of my life as an amateur, and rec…
ytc_UgyTKqA-8…
Comment
AI developers warning about the monster AI and how it's a good chance it will destroy all life on Earth....continues to develop AI. This isn't some unavoidable cosmic threat, like our sun going supernova. We can choose to stop anytime. It's not the AI holding a gun to our head, It's the developers that are doing so. This tells me that they are either hyping this up for clicks and it isn't true, it is true and they think they can have their cake and eat it too by being "Careful", or they believe it and eagerly want to be part of the process killing off Humanity.
youtube
AI Moral Status
2025-12-15T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx7JNfbcvWlgsraDR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyYPftS1TpOsFeHs0F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzF2eBZVfkglB_garB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwncipLIvZXpDP72fN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxjJEOrEoSXOjjCqqF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyyY1SMsloxpUoPCct4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxYWbzmNW9IHlBD1J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyqQ59snl980pwFDL54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzeeuIipDwmUxx84hd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyuEWDxovxc8xKaQpN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
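The batch response above is a JSON array keyed by comment ID, so the look-up-by-ID view can be built by parsing the raw text and indexing the rows. A minimal sketch, assuming the field names shown in the response (the `index_by_id` helper and the inline sample are illustrative, not the tool's actual code):

```python
import json

# Two rows copied from the raw response above, standing in for the full array.
raw_response = """
[
 {"id":"ytc_Ugx7JNfbcvWlgsraDR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyuEWDxovxc8xKaQpN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch-coding response and index each coded row by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_Ugx7JNfbcvWlgsraDR94AaABAg"]["emotion"])  # outrage
```

Indexing into a dict makes each ID look-up O(1), which matters once the response covers a full batch of comments rather than ten.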