Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a ridiculous title (from the underlying source) and ridiculous descriptor. It makes people think of a switch on a robot. That is absolutely not what this is. This is "if things seem dangerous we'll stop developing". There is no physical killswitch. There is no digital killswitch. It's literally just an agreement.
reddit AI Governance 1716782172.0 ♥ 52
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          industry_self
Coded at        2026-04-25T08:33:43.502452
Emotion         outrage
Raw LLM Response
[
  {"id":"rdc_l5ukbbq","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_l5w47g2","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_l5w9tm5","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l5udih3","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l5u7ahe","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]
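The raw response is a JSON array of per-comment codes, so a single comment's coding can be recovered by matching on its id. A minimal sketch of that lookup, assuming only the field names visible in the response above (the helper name `coding_for` and the truncated sample data are illustrative, not part of the pipeline):

```python
import json

# Abbreviated sample of the raw LLM response shown above; field names
# match the response, but only two of the five records are included here.
raw_response = """
[
  {"id":"rdc_l5ukbbq","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_l5u7ahe","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]
"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    return next((c for c in json.loads(raw) if c["id"] == comment_id), None)

code = coding_for(raw_response, "rdc_l5u7ahe")
print(code["emotion"])  # outrage
```

Matching by id rather than by position guards against the model returning records in a different order than the comments were sent.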