Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I would trust an AGI over any human being any day. The reason humans are so afra…" (ytc_Ugx_8vAbE…)
- "Skynet isn't very far away. 😮 The robot hotel is a good example of the future of…" (ytc_Ugwrdn4OO…)
- "Quick thought: if AI will have this nearly free access to the web and can also m…" (ytc_Ugw1nY-wn…)
- "AI is already biased to their makers and tell lies ALL THE TIME. I dont trust AI…" (ytc_Ugz40v27X…)
- "It will vehemently resign from solving conjectures, because that is what the alg…" (ytc_UgxlpOsqx…)
- "I might be an optimistic, but I think there will be a weird equilibrium between …" (ytc_UgyRIs7V-…)
- "Ai is more like an alien invasion than just a new technology. We've created a sy…" (ytc_UgyrrqyGj…)
- "but we can use AI in mobile and earn money from mobile and earn money and full f…" (ytc_UgxiJRzyg…)
Comment
Humans are so stupid. Think of the layers in this process. Power addicted, money mongering humans developing something they are terrified of, but can't just stop developing it because those humans won't actually mature to fix their problems and so are terrified of other humans developing their AI. When the answer is, no more competition, no more boarders, no more power. AI isn't going to destroy humans. Humans will continue to externalize their issues until they can make something different enough to scapegoat and carry out the self destruction for them. Just stop worshipping money and power. Delete the AI.
youtube | AI Moral Status | 2025-12-15T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzjz1QFpg3WKePizjt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwk0l1gVXM3Vx78e0J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx3QADr93yUv_WG97R4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw284vwrVV-jM6AKUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw8Q7j5GZ0KXVvpAap4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyrDbWjW9ea_eKaELp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy50pGsAapeF7YOa6Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKLXGlQWu8ss88QF94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyVbOfCbpVYf3BwkKJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykVLsKRSH31WXUewZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
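The lookup-by-comment-ID view above can be reproduced directly from a raw response: parse the JSON array and index the rows by their `id` field. A minimal sketch, assuming the raw response parses as the JSON array shown (the two entries below are copied from it; any other names here are illustrative, not part of the tool):

```python
import json

# Two per-comment codings taken verbatim from the raw LLM response above.
raw_response = """
[
  {"id":"ytc_Ugw8Q7j5GZ0KXVvpAap4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVbOfCbpVYf3BwkKJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# Index codings by comment ID so "look up by comment ID" is a dict access.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugw8Q7j5GZ0KXVvpAap4AaABAg"]
print(coding["reasoning"])  # virtue
print(coding["emotion"])    # indifference
```

The same index also drives the dimension table for a sample: each of the four coded dimensions (responsibility, reasoning, policy, emotion) is just a key on the looked-up row.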