Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think a good summary of the mainstream expert opinion on AI is the [Open Letter On AI](https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence) from 2015, which was signed by Google's Director of Research, the founders of DeepMind, and a lot of other AI researchers, as well as Musk and Hawking. The letter emphasizes the incredible benefit that AI can have for humanity if it's developed safely, arguing that "...the eradication of disease and poverty are not unfathomable". However, it also calls for more research into AI safety, and the attached research priorities list a number of possible dangers, from government abuse in the short term to more speculative long-term problems like an "intelligence explosion". This article makes it seem as though Eric Schmidt and Elon Musk have two radically different views of AI, with one seeing it as entirely good and the other seeing it as entirely bad. In reality, though, I think their views are much closer. I think their disagreement really comes down to exactly how much effort it will take to keep AI beneficial, and how likely we are to put in that effort.
reddit AI Governance 1527865647.0 ♥ 10
Coding Result
Dimension       Value
Responsibility  none
Reasoning       utilitarian
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_dzyft7n","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_dzxz0e9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_fal316o","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_fal7kg7","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_fala5ne","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"resignation"}
]
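As a minimal sketch of how a response in this shape could be turned into per-comment coding tables, the snippet below parses the JSON array into an id-to-dimensions lookup. The `parse_codings` helper and the fixed dimension list are assumptions for illustration, not the pipeline's actual code; the record schema (an `id` plus the four coding dimensions) is taken from the raw response shown above.

```python
import json

# Two records copied from the raw LLM response above, inlined for a
# self-contained example.
RAW_RESPONSE = (
    '[{"id":"rdc_dzyft7n","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"rdc_dzxz0e9","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

# Coding dimensions as they appear in the response (assumed fixed set).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(payload: str) -> dict:
    """Return {comment_id: {dimension: value}} for each coded record."""
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in json.loads(payload)
    }

codings = parse_codings(RAW_RESPONSE)
print(codings["rdc_dzxz0e9"]["emotion"])  # → approval
```

Falling back to "unclear" for a missing dimension mirrors the value the model itself emits when it cannot code a comment, so downstream tabulation never hits a KeyError.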