Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean if AI ever reaches the level to where it can choose to harm a human, like on a level at which it is doing it for its own personal reasons, we would probably be pretty fucked. I don’t think the legal system would particularly stand much of a chance in a Skynet scenario.
Source: reddit · Topic: AI Moral Status · Timestamp: 1524965344.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_dy4e3bg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_dy4ftoz","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"rdc_dy4phxw","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"indifference"},
  {"id":"rdc_dy54eq6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_dy57k0p","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"}
]
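The coding result shown above can be recovered from the raw response by parsing the JSON array and matching on the comment id. A minimal sketch, assuming the raw response is always a well-formed JSON array of records like the one above; `lookup_codes` is a hypothetical helper name, while the ids and dimension names come from the response itself:

```python
import json

# Raw LLM response as captured above: a JSON array with one record per coded comment.
raw_response = """
[
  {"id":"rdc_dy4e3bg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_dy54eq6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

def lookup_codes(raw, comment_id):
    """Parse the raw model output and return the coding record for one comment id."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None  # id not present in this batch

codes = lookup_codes(raw_response, "rdc_dy54eq6")
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → ai_itself consequentialist none fear
```

In practice the parse step may need a guard (`try`/`except json.JSONDecodeError`), since a model can emit text around the array that breaks strict JSON parsing.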