Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or click one of the random samples below to inspect it.
- "Youtube fyp when I wanna watch yt and kill time: *slop* Youtube fyp when I have …" (ytc_UgztGtETv…)
- "People that instantly default to AI are simply abusing a shortcut, a shortcut th…" (ytc_Ugx4TqLx5…)
- "Those who fail to learn from the past are doomed to repeat it. Just look at what…" (ytc_UgzFGHI2y…)
- "You support Tech Bros when they say Niteshade might work, but you don't support …" (ytc_UgzOh-sLV…)
- "Yep, you should see the autonomous taxis in San Francisco. They are amazing but …" (ytc_UgxVHyMUw…)
- "Why does Geoffrey Hinton think electric cars are good? Surely he doesn't still …" (ytc_UgxK8dt5g…)
- "I this guy made some really good points. I completed disagree with his opinion o…" (ytc_UgxV_Fb85…)
- "I mean I understand the legal loophole they're using. Honestly most of this stuf…" (ytc_UgzE5SdZG…)
Comment
People, stop and think for a moment. AI does not think, it is bits on a data center somewhere. It only executes instructions. The only way it would kill us would be if it were optimizing for less pollution or something and realized humans polluted the most and it was given the nuclear codes. But it is not currently trained to optimize for such, so we can sleep safe. A better video for that would be the one Cleo Abram did, especially in minute 5:48:
Spread the knowledge. Stop the fear.
https://youtu.be/MWHN6ojlVXI?si=2YXoOYHeZCdfnzUF
AI Moral Status · 2025-04-30T03:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzernVDoT2vNj5Bf2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHi1EfOUL1lok6qtl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxNuIM5nYiawrmFiYN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwxQs1c29A-eHZYqYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxGnfxEk7LjgXE_1gl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy6_ixvwuiknDz3-qB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx1TjPCoXECH3DdK6R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwryk8hdwO66O74eKl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxIQ9oCQb9Xfh1Tczx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwDH8nmsCfM6SPohdB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
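For readers who want to reproduce this lookup outside the viewer, the batch response above is plain JSON and can be parsed and indexed by comment ID in a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the field names mirror the JSON above, but the allowed-value sets and the file name are assumptions inferred only from the values shown here.

```python
import json

# Category sets observed in the batch above; the real codebook may permit
# more values -- these sets are an assumption, not the published schema.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "indifference", "outrage", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response and index its rows by comment ID."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                # Surface rows the model coded outside the expected categories.
                print(f"warning: {row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = row
    return coded

# Look up one comment's coding, as the "Look up by comment ID" box does.
with open("raw_llm_response.json") as f:  # hypothetical file name
    coded = parse_batch(f.read())
print(coded["ytc_Ugx1TjPCoXECH3DdK6R4AaABAg"])
```

The ID used in the lookup is the one row in this batch whose four dimension values match the coding-result table above (developer, consequentialist, none, indifference).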