Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Ugh. Personally I don't think anything we are working on has even the slightest chance of achieving AGI, but let's just pretend all of the dumb money hype train was true. Well it's the gun thing isn't it? I'm pretty damn sure the gun in my safe is unloaded, because I unload before putting it in. I still assume it is loaded once I take it out of the safe again. If someone wants me to invest in "We will achieve AGI in 10 years!" I won't put any money in. If someone working in AI doesn't take precautions to prevent (rampant) AGI, I'm still mad.
reddit · AI Governance · 1716789965.0 (Unix timestamp) · ♥ 11
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[ {"id":"rdc_l5ukbbq","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_l5w47g2","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"rdc_l5w9tm5","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_l5udih3","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_l5u7ahe","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"} ]