Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Excellent question, but I'd like to add something. Recently Nick Bostrom (the author of the book Superintelligence, which seems to have started the recent scare) has come forward and said ["I think that the path to the best possible future goes through the creation of machine intelligence at some point, I think it would be a great tragedy if it were never developed."](http://www.ibtimes.co.uk/nick-bostrom-it-would-be-great-tragedy-if-artificial-superintelligence-never-developed-1501958) It seems to me that the backlash against AI has been a bit bigger than Bostrom anticipated, and while he thinks it's dangerous, he also seems to think it ultimately necessary. I'm wondering what you make of this. Do you think that humanity's best possible future requires superintelligent AI?
reddit AI Bias 1438016751.0 ♥ 442
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_lv8lnbd","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_lv8cgsc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_cthw656","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_cthxq37","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_cthzy1i","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
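The model returns one JSON array covering a whole batch of comments, so each coding result has to be pulled out of the array by its record id. A minimal sketch of that extraction step, using the response above (the source does not state which `rdc_…` id belongs to this particular comment, so the lookup id below is just one of the ids present in the batch):

```python
import json

# The raw LLM response shown above: a JSON array of per-comment coding records.
raw = (
    '[ {"id":"rdc_lv8lnbd","responsibility":"unclear","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    ' {"id":"rdc_lv8cgsc","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    ' {"id":"rdc_cthw656","responsibility":"unclear","reasoning":"mixed",'
    '"policy":"unclear","emotion":"mixed"},'
    ' {"id":"rdc_cthxq37","responsibility":"unclear","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    ' {"id":"rdc_cthzy1i","responsibility":"developer","reasoning":"deontological",'
    '"policy":"unclear","emotion":"fear"} ]'
)

# Parse the batch and index the records by their id for per-comment lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up one record from the batch (id chosen for illustration only).
record = by_id["rdc_cthxq37"]
print(record["policy"], record["emotion"])  # → none approval
```

Indexing by id rather than by position guards against the model reordering or dropping records in the array; a production pipeline would also want to validate that every expected id is present before trusting the batch.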