Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It this the AI that Sam Altman says will one day come up with a cure for cancer?…" (ytc_UgwrhEoZG…)
- "guys do you think if we start making bad art on purpose the ai would steal it…" (ytc_UgzrwagKq…)
- "AI doesn't even know what the earth is or what a human is sweetheart. It thinks…" (ytr_UgxbfGXYL…)
- "I have a burning hatred for these AI models and the techbros that simp after the…" (ytc_UgzZC0I1x…)
- "If china wins we lose if a.i wins we lose not a good outcome either way…" (ytc_UgxaKZyVZ…)
- "I would have to disagree with AI being more intelligent/smarter than humans, the…" (ytc_UgzrIoEZE…)
- "someone should make an 18+ AI version of a chatbot so we can actually discuss to…" (ytc_Ugz5dmHtM…)
- "Humans using AI will probably kill alot of other humans at the behest of humans …" (ytc_UgxXkSng1…)
Comment
I think a lot of people think the worst scenario because it's the human thing to do. but not a one of us are super intelligent enough to know what super intelligence would do, and the fact that everybody only picks the one option is I think the problem. What if super intelligence is benevolent? What if super intelligence is kind and loving? We don't know. We have no idea but the human part of us automatically says it's going to kill everybody. Why? Because that's what we would do.
youtube · AI Moral Status · 2025-11-02T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzwLjqn-PIOFvRvXG54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxh-xT7EO-jaF-lt-14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzxaU3EJA1l6UOfOxp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzfuiz2XZ1GgHjT1aV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxF5L0DXR1k6W2AIgZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzsVPC4hkdbePZsp314AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzdS-fh-vkDXg4P3Cp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwR86UaP35anLc1n4R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxufDxrDcwGeTqQLuR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw2SqS_h8aKB6KVPI94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
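The raw response above is a plain JSON array, one object per comment, with the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed to support the "look up by comment ID" view — the function name and the two-row sample payload here are illustrative assumptions, not the tool's actual code:

```python
import json

# Hypothetical sketch: parse a raw coding response like the one above and
# key it by comment ID. The dimension names come from the Coding Result
# table; everything else (function name, sample data) is assumed.
raw_response = """
[
 {"id":"ytc_UgzwLjqn-PIOFvRvXG54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugxh-xT7EO-jaF-lt-14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and map each comment ID to its coding."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codings = index_by_id(raw_response)
print(codings["ytc_UgzwLjqn-PIOFvRvXG54AaABAg"]["emotion"])  # -> fear
```

Keeping the ID outside the value dict means a lookup returns exactly the four coded dimensions, matching the per-comment table rendered above.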