Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Like literally it makes ai slop that sucks so much I have to stop myself from pu…" (ytc_Ugwbjf_Ht…)
- "AI (in its current state) should NOT have personhood. TLDR; Humans are the cau…" (ytc_Ugz566HDV…)
- "fun idea ai image generater only trained on poisend art see what it is going to …" (ytc_UgwtJBCUf…)
- "AI is going to ruin everything. I'm telling young people to say "fuck you" to t…" (ytc_UgzZctbTw…)
- "Even when you have an autopilot that is safer than humans, that actually makes i…" (ytc_UgxbQJn5b…)
- "Predictive policing ?!? Hmmmm sounds like The Movie " Minority Report" was alrea…" (ytc_UgyiQwLGM…)
- "AI don’t work like that. If you asked them stuff like that unless their text is…" (ytc_UgwNG_Dfp…)
- "Isn't it funny / scary how America is repeating the same racist arrogant ignoran…" (ytc_UgzN2hNDp…)
Comment

> AI doesn't have its own interests, AI has human-programmed interests that combine in human ways, just very quickly. Super intelligence is just fast computations. When you say, help maximize human happiness, you touched on the problem. You play a game and like cheating. Whether you code cheating or not, your overall code structures carry your desire for sin.

| Source | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2025-10-31T00:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyjyMyY_O4NgeZAJjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxmTHu1yRq14lt75Dd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNWZ7nhSCvpXJJsDV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxT7RhFToA3B5KS5el4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBfFsr_6_n16hJfed4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxm7-V2cw080X9sQZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyCiYAK2ms5Q0A5qhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxBMJKI2GG-3mj8Qi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyKp43VLPuelxIF9Kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKjYNeaaZSElY40Qx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```