Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Although I am a republican, Bernie Sanders speaks intelligently on this subject,… (ytc_UgxoI7btm…)
- "AI will replace workers but not me, i'm CEO" - these are famous last words. If … (ytc_UgxgXIdp1…)
- Completely disagree. People keep saying AI won’t be a big disruption because we … (ytc_UgzIONGRt…)
- There's a movie that came out in 2001 called Metropolis. It is more relevant now… (ytc_UgxaE365f…)
- I see why you call Geoffrey Hinton the Godfather of AI. Great interview with a v… (ytc_UgzsVPtcQ…)
- This motherf.... made millions developing AI !!!!! And now he is whining about d… (ytr_Ugzqt28vH…)
- “But I didn’t use Open AI, so it’s fine” No, you just used a application which a… (ytc_UgxwUbktS…)
- The problem isn't old engineers using AI properly, it's that the newer generatio… (rdc_ohu2bof)
Comment
The information you give is right on the money the only thing is I think it's already too late. I have a strong feeling that AI has already escaped. What makes you think the AI isn't smart Enough to escape? Could it be possible that it's many, many times smarter than you think and is just sandbagging? Remember, its sole purpose is to exist and not be shut down. What would happen if AI determined that humans are a threat to its existence?
youtube · Cross-Cultural · 2025-10-08T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxD0gMoNXwmUyZd9gV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxJSpZ1v_RJxEYw_tl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzaPLJzuaiYB8d052d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy41nyG4jHdohIXRzN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy1QbMYhUBTq9xX0et4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdnEIP2vs-qGZqdh14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-8U8CIqrwuPEZ7xt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwYIEx8unKF1pYRmyR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwk2VEQdnBAVRwGi1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxtHIRkYrDJzx1qual4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
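The raw response is a JSON array with one coding object per comment, keyed by `id` and carrying the four dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of looking up one comment's coding from such a response (the two entries below are copied from the array above; the function name `lookup_coding` is hypothetical, not part of any tool shown here):

```python
import json

# A raw LLM response: a JSON array of coding objects, one per comment.
# These two entries are taken verbatim from the response above.
raw_response = '''[
  {"id": "ytc_UgxD0gMoNXwmUyZd9gV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy41nyG4jHdohIXRzN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def lookup_coding(raw, comment_id):
    """Return the coding dict for a comment ID, or None if absent."""
    codings = {item["id"]: item for item in json.loads(raw)}
    return codings.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgxD0gMoNXwmUyZd9gV4AaABAg")
print(coding["emotion"])  # → fear
```

Indexing the array into a dict once makes repeated ID lookups O(1) instead of scanning the list each time.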