Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think AI is a bad decision in the first place. We've all seen what happens in …" (ytc_UgiltTSEW…)
- "Humans cant even decide what our own interests are, how can we expect an AI to d…" (ytc_UgxeVF3QO…)
- "I have been looking at the AI coding tools as when you are undecided to jump at …" (ytc_Ugz2Cbf-D…)
- "To adapt to the emergence of AI's as a the standard source of low precision info…" (ytc_UgxsNlavU…)
- "I already know Ai is a hard core communists run to enslave humanity of digital i…" (ytc_Ugyj0vBBk…)
- "have depression and find ai immensely helpful and supportive. this kid was facing …" (ytc_Ugxz851fu…)
- "That is ONLY if everything goes right. We could very easily end up in a dystopi…" (ytr_UgyvdfN-R…)
- "If it's done this way, basically humans still can't fully accept AI as a combina…" (ytc_UgwkYba6q…)
Comment
This is really hilarious. I am not panicking right now, even though AI has actually destroyed whatever work I still had as a journalist. I'll probably have to use it, but it's no fun. Actually, the writing was fun; I didn't give a damn what I wrote about, because the writing itself was what was fun. But let's see where this goes; usually there's some sort of glitch in the huge machinery somewhere, and suddenly we realize that all of it was bullshit.
On the other hand, maybe not
Source: youtube
Video: AI Moral Status
Posted: 2025-06-15T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxcxK_wkl3R2Nhm7iV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzgoKnMfkAsIox9O654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxHOK_sjjK4tnnBeZx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxscWInJQ1TSm70qSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwfqpsEW-vD17aFp0V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzZHYi_qc6dA5SZmrB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGWS5KccA82zpnbwB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxisqPXHdAQbNqlyld4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyI4Xv0-mfmuabHjd54AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyneN72X0eVPp2Fec94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}]
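A response in this shape can be parsed and sanity-checked before it is stored. The sketch below is a minimal example, not the pipeline's actual code; the per-dimension vocabularies are inferred only from the values visible in the sample above and are an assumption (the real codebook may allow more).

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may define additional labels).
DIMENSIONS = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "mixed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_response(raw):
    """Parse a raw LLM coding response (a JSON array of records) and
    index valid records by comment ID, skipping any record whose
    dimension values fall outside the known vocabularies."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
      '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
coded = parse_response(raw)
print(coded["ytc_example"]["emotion"])  # indifference
```

Indexing by `id` is what makes a lookup-by-comment-ID view cheap: each coded record is retrieved in one dictionary access rather than a scan of the raw response.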