Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “They are just looking for a payout. If a chatbot can make you commit suicide t…” (ytc_UgycRMu-K…)
- “‘What ai thinks what the end of the world will look like on earth’ The first p…” (ytc_UgxK-Hzj1…)
- “Cant wait to laugh at you all when AI will just be AI. Not afraid at all. Goo…” (ytc_Ugy7eHSPa…)
- “Oh so I’m not the only one who’s convinced of this. Welcome to the saddest club.…” (rdc_hm8bp5o)
- “why don't ai bros just write poems if they put so much effort into describing th…” (ytc_UgwZLYAKs…)
- “i hate hate HATE that i actually like the first image. like all the real ones ar…” (ytc_UgzwSxf-q…)
- “as an artist, Ai art is so stupid dude- like you are able to actually be creativ…” (ytc_UgztrF5zr…)
- “@Unzsoned but not illegal to use ai for defamation yet. Either way, the things y…” (ytr_Ugwi6S2mJ…)
Comment
Why would a robotic AI want to harm humans? Robotic AI could live anywhere, including in space. It's not in competition with humans for resources. It could go live on Titan or Europa. It could combine with quantum computers (which would be even happier in the cold of outer space) & travel to the stars. ... So the fear of far-future super-intelligent robots is probably unfounded. Such a being could go anywhere & exploit so many resources -- why would it want to harm humanity?
Source: youtube · AI Moral Status · 2025-11-06T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVF3XPGOawS-54AOx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_O5NAfCuhi_69hG14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzrH4v7YnVgcfw8VAh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx-uGju0uiNmQGQ5EN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGI1fCaYO7Ssoou9l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2nnMGueTMgcUg_iJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzrNR7UCeFwc30YfQR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzthlLbXFc2bC1VB7l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxkDVrUfI2M5eQyJ1R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwdKFaUZPEp9dUAmed4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
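The lookup-by-comment-ID step above can be sketched in a few lines: the raw LLM response is a JSON array of per-comment codings, so parsing it and keying each row by its `id` field gives constant-time lookup. This is a minimal sketch, not the dashboard's actual implementation; the function name `index_by_comment_id` is illustrative, and the two sample rows are taken from the response shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, shortened to
# two rows from the example above for illustration.
raw_response = """
[
  {"id": "ytc_UgyVF3XPGOawS-54AOx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzrH4v7YnVgcfw8VAh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse the model output and key each coding by its comment ID."""
    codings = json.loads(payload)
    return {row["id"]: row for row in codings}

lookup = index_by_comment_id(raw_response)
row = lookup["ytc_UgzrH4v7YnVgcfw8VAh4AaABAg"]
print(row["responsibility"], row["emotion"])  # prints: company fear
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to log the raw string for exactly the kind of inspection this page supports.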