Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I love Hinton. He’s so peaceful but it’s kind of like he had a bad child. His ch… (ytc_UgwW5Xp5t…)
- >Is there such a thing as a Korea town? Do we have one in Toronto? Have you … (rdc_clv71ib)
- No matter the good they might do, I just can't see why people think AI is a good… (ytc_UgysNiNI8…)
- That making a neural network by hand bird beak example is amazing it suddenly fi… (ytc_UgwFVE7GN…)
- I’m curious if we can do anything to the system to ruin some of the training. Do… (ytc_UgwwqCpkh…)
- Given how AI is only written code and cannot actually think, this sounds set up.… (ytc_UgzrpQExa…)
- "we need to fix the AI" No, stop using it for anything other than trivia or hel… (ytc_Ugw0OnfoF…)
- 2 min silence for those who consider them self tech reviewers and then feel amaz… (ytc_UgyJSK6dK…)
Comment

> Regarding the comment about people who think humans should die and be overtaken by AI: I don't think we should all die but I do agree that humans need not be the end-all-be-all. I believe that some of the suffering in society is fundamentally tied to the fact that we are in human bodies with human brains and old evolutionary imperatives. Such problems won't be fixed merely with better institutions or education; they are impossible to solve unless we learn to make substantial modifications to our bodies and brains or are overtaken by another species

youtube · AI Moral Status · 2025-10-31T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugy352lDkj3E40ABTPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyqEnkkOba6Rc-0kkB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgykaBsAKWzANf78_nB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7PXWuFqtYSuAETC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5RvOiYN8A2YddYUJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy2P55-9EZRxrm-s9R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjEaO7SUA096JPSxB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxEX_FhsbfY0EuN3l14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgztzVvcq-E-XJa3_Jl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
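The raw response is a JSON array of per-comment records, one object per coded comment, each with the four dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming that response shape; the `index_codes` helper below is illustrative, not part of the tool itself:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugy352lDkj3E40ABTPd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response and key each coding record by its comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw)
print(codes["ytc_Ugy352lDkj3E40ABTPd4AaABAg"]["emotion"])  # fear
```

In practice the raw string would come from the stored model output for a batch, and a missing ID (e.g. a comment the model skipped) would surface as a `KeyError` on lookup.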