Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This sentiment makes no sense to me especially after the comparison to nuclear weapons. How is it reasonable to expect the US to slow down development of AI if it's powerful enough to destroy humanity? This would have to be worldwide agreement because if we don't do it, someone else will. It's one of the many reasons the nuclear bomb was created.
reddit · AI Responsibility · 1710786500.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        utilitarian
Policy           regulate
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kvg24c2", "responsibility": "company",     "reasoning": "deontological",   "policy": "none",          "emotion": "outrage"},
  {"id": "rdc_kvg3s6x", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "rdc_kvgnw5q", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "resignation"},
  {"id": "rdc_kvgrpb8", "responsibility": "company",     "reasoning": "virtue",           "policy": "industry_self", "emotion": "outrage"},
  {"id": "rdc_kvh5lo8", "responsibility": "government",  "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"}
]
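A minimal sketch of how a raw batch response like the one above could be parsed back into per-comment codes. It assumes the batch JSON is well-formed and that `rdc_kvgnw5q` is the id of the comment shown above; the variable names are illustrative, not part of any tool's API.

```python
import json

# The raw model output, exactly as logged above.
raw = '[ {"id":"rdc_kvg24c2","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_kvg3s6x","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_kvgnw5q","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"rdc_kvgrpb8","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"outrage"}, {"id":"rdc_kvh5lo8","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"indifference"} ]'

# Index the batch by comment id so one comment's codes can be looked up directly.
records = {r["id"]: r for r in json.loads(raw)}

# rdc_kvgnw5q is assumed to be the id of the comment shown above.
row = records["rdc_kvgnw5q"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
```

Note that the raw response stores the reasoning label verbatim as the model emitted it; any later normalization (e.g. mapping synonymous labels onto a fixed codebook) would happen outside this parsing step.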