Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not about "stopping". It's about keeping safety in mind. And it's about not doing a few specific techniques that *look* like they make the behavioral problems in AI go away - but in truth, only reduce the rates at which they occur, and conceal the remaining issues from detection.
reddit · AI Governance · 1752761279.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n3s6itz", "responsibility": "company",   "reasoning": "virtue",          "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_n3mtswu", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_n3regrz", "responsibility": "company",   "reasoning": "deontological",   "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_e2vvacc", "responsibility": "unclear",   "reasoning": "unclear",         "policy": "unclear",   "emotion": "indifference"},
  {"id": "rdc_e2vxuqz", "responsibility": "unclear",   "reasoning": "unclear",         "policy": "unclear",   "emotion": "indifference"}
]
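The raw response above is a JSON array with one record per comment, keyed by id. A minimal sketch of how such a response might be parsed and a single comment's coding extracted (the `lookup` function and its behavior for missing ids are illustrative assumptions, not part of the tool):

```python
import json

# Abbreviated example of a raw LLM response, using ids from the output above.
raw = """[
  {"id": "rdc_n3mtswu", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_e2vvacc", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

# The four coded dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment from a raw response."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            # Keep only the expected dimensions; treat missing ones as "unclear".
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(f"no coded record for {comment_id!r}")

print(lookup(raw, "rdc_n3mtswu"))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'fear'}
```

Matching by id rather than by position makes the join robust if the model reorders or drops records, which is why the "unclear"/"indifference" fallback rows remain identifiable.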