Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly imo the biggest risk of Ai doing harm isn’t a smart one, but a dumb one. Right now dumb is all we have. They don’t “think” they search and regurgitate. It does whatever it does without actually “knowing” whether it should or shouldn’t. A truly intelligent/sentient ai would be more likely to look at us the same way we look at animals. (No the irony isn’t lost on me) Predictable life with obvious fail states. But right now all we have is generative. It makes mistakes an can’t even tell. Ai weapons is the real issue. A silent weapon you can’t trace or take responsibility for. But meh not like I truly know. I always treat it as if could actually think or feel to begin with. But I think greedy people using “dumb” ai will definitely ruin us an destabilize any idea of safety we like to think we have. But yea. Idk. I guess no one does but I definitely don’t.
youtube AI Moral Status 2025-12-16T19:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyzk5RcKcF4Y69ZxCx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzp4PvqydJKmblSibB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxdr8bzDSb90inH4Q14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzmuQsUxxV7p1q8KkZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwtvR_eyp_RO9YB6wt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzjqqlMHr0n_R7DiQF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyvr3tc8fieR-JeJfx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxxfYNrM-SoboiK0fB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyeRAn3_UwkOD9YuFd4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzCkU7Ij7_XzVefvNt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
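As a minimal sketch of the inspection step described above, the snippet below parses a raw batch response like this one and looks up the coded dimensions for a single comment id. The helper name `coding_for` is our own illustration, not part of any pipeline API; the two entries in `raw_response` are copied from the response shown here.

```python
import json

# Two entries copied from the raw batch response above (truncated for brevity).
raw_response = """[
  {"id": "ytc_UgzjqqlMHr0n_R7DiQF4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyvr3tc8fieR-JeJfx4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a raw batch
    response, raising KeyError if the id is absent."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    raise KeyError(comment_id)

coded = coding_for(raw_response, "ytc_UgzjqqlMHr0n_R7DiQF4AaABAg")
print(coded["reasoning"], coded["emotion"])  # consequentialist fear
```

Matching on the `id` field rather than list position keeps the lookup robust if the model returns entries in a different order than the comments were submitted.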