Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Yeah, short alarmist headline, so good. I mean the statement that came out, referenced at the start of the article, signed by a veritable who's who in AI stating that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” is not exactly reassuring -or- mincing words about the potential danger.
reddit AI Moral Status 1685604027.0 ♥ 20
Coding Result
Dimension      | Value
-------------- | -------------------------
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n813ep0", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_n822h2x", "responsibility": "company",   "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_n8degh7", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_jmg599m", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jmhtfgc", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "approval"}
]
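Because the model replies with a plain JSON array, the per-comment coding can be recovered with ordinary JSON parsing. A minimal sketch (the `id` and dimension field names match the response shown above; the variable names are illustrative):

```python
import json

# Raw LLM response: one coding object per comment id (excerpted from above).
raw_response = '''[
  {"id": "rdc_jmg599m", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_n8degh7", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Index the codings by comment id for lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Look up the coding for the comment displayed in this section.
coding = codings["rdc_jmg599m"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
```

This matches the table above: the entry for `rdc_jmg599m` carries the `ai_itself` / `regulate` / `fear` values shown in the Coding Result.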