Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Assuming progress continues, AI will become much more capable than humans in an increasing number of domains. To make use of this potential, we will need to give these systems resources.

> There are lots of geniuses in the world buddy. Being smart doesn't make you more capable of taking over the world.

Intelligence in this context means capability. Something more capable than a human in every domain would obviously be more capable of taking over the world.

> There's no way to know that the president of the United States isn't a crazy person who will launch the nukes because he's angry someone called him an orange blob either.

Which is why we have safeguards against that. We don't have many safeguards around AI, and there's clearly a financial incentive to ignore safety in order to be the first to capitalise on the potential AI offers.
reddit AI Governance 1708176946.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_kqt5ru8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_kqu2y8v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_kqtb3wm","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_kqt78dn","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_kqtbky6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
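Note that the model's original output closed the array with `)` instead of `]`, which makes the string invalid JSON; a strict parser would reject the whole response, which is consistent with every dimension being recorded as "unclear". A minimal sketch of how such a response might be parsed defensively, repairing only that specific trailing-bracket typo (the function name and fallback behaviour are illustrative assumptions, not part of the actual pipeline):

```python
import json

def parse_coding(raw: str):
    """Parse the model's JSON array of coding records.

    Repairs one known failure mode (a trailing ')' where ']' belongs);
    returns None on any other malformed output so the caller can fall
    back to coding every dimension as "unclear".
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    fixed = raw.rstrip()
    if fixed.endswith(")"):
        # Swap the stray closing parenthesis for the missing bracket.
        fixed = fixed[:-1] + "]"
        try:
            return json.loads(fixed)
        except json.JSONDecodeError:
            pass
    return None
```

For example, `parse_coding('[{"id":"rdc_kqt5ru8","emotion":"fear"})')` would recover a one-element list, while genuinely unparseable output yields `None`.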