Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We still could very well all die - AI alignment is an unsolved problem and if you thing an AI that is as smart as a human can be made, that’s when the problem starts. It’s not talked about enough but basically, tell AI to make world peace - it kills all humans. World peace achieved. It’s really hard to give an AGI a goal and A. Make sure you train that goal correctly and even B. Even if you do train it correctly, it’s very easy for it to do something you don’t want because specifying goals is hard, and that’s without (A)
reddit · AI Governance · 1757094517.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_ogr7ytr","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_ogt0jyf","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"rdc_ogsdhci","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"rdc_nclese3","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"rdc_nck5eay","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}]