Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The only way I can see it not happen is if we teach a universal AI to protect us from "bad AIs". Where bad can be either badly trained, or trained with malicious intent.
Source: reddit · AI Responsibility · 1683473703 (Unix timestamp, 2023-05-07) · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jj80mdj", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jj7xk18", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jj929eq", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "rdc_jj7i9jx", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jj7jvj8", "responsibility": "none",      "reasoning": "virtue",           "policy": "none", "emotion": "approval"}
]
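The model codes comments in batches, so the displayed result corresponds to one record in the array (here, id rdc_jj7xk18). A minimal sketch of how such a batch response can be parsed and matched back to a single comment; the variable names are illustrative, not the tool's actual API:

```python
import json

# Raw LLM batch response: one JSON array, one record per coded comment.
raw_response = """[
  {"id":"rdc_jj80mdj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jj7xk18","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_jj929eq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_jj7i9jx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jj7jvj8","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

records = json.loads(raw_response)

# Index by comment id so each comment's codes can be looked up directly.
by_id = {record["id"]: record for record in records}

codes = by_id["rdc_jj7xk18"]
print(codes["responsibility"], codes["emotion"])  # ai_itself fear
```

Looking records up by id rather than by position guards against the model reordering or dropping entries in its response.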