Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
/u/iamthetio asked:

> Hello, and thanks for the AMA.
> When we talk about the dangers of AI, we may be talking about the danger of having a self-driven car and its decision making, or a more general AI and whether it will lead to an AI+, and ultimately to a larger danger concerning all of us.
> I am interested when philosophers (specifically) talk about the imminent dangers of the second type of AI, based on recent achievements (general Atari game playing, beating Go champion, usage in medical environments etc) and my question is:
> What do you think should be the relationship between academic philosophers, who focus on how imminent the AI danger is, and the actual engineering behind the aforementioned achievements?
> Should academic philosophers incorporate into their arguments what are the specific modelling techniques or search algorithms (eg monte carlo tree search, back-propagation, deep neural nets) and how they work when they argue about how close to the possible danger we are? If not, is the imminent part argued in a satisfactory way in your opinion?
> Thanks for your time. Really happy that you are doing this AMA and interested to read all your responses.

i don't know if philosophers are the best judges of just how imminent human-level AI or AI+ is. in my own work on the topic (e.g. the paper on the singularity linked up top) i've stressed that a lot of the philosophical issues are fairly independent of timeframe. of course it's true that the question of imminence is highly relevant for practical purposes. i think that to assess it one has to pay close attention to the current state of AI as well as related fields: e.g. in the current situation, to try to figure out just what deep learning can and can't do, what are the main obstacles, and what are the prospects for overcoming them. but the fact is that even experts in this area have widely varied views about the timeframe and are wary of making confident predictions. i chaired a pa
reddit · AI Moral Status · 1487781295.0 · ♥ 7
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_de2k3ds","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_de2mgs2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"rdc_de2q9jg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"rdc_de2tjrw","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"rdc_de2txwo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}]
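The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions plus an id. A minimal sketch of how such a payload could be parsed back into the per-dimension table shown above, using only Python's standard library (the field names come from the response itself; the variable names are illustrative):

```python
import json

# The first record of the raw LLM response shown above, as a JSON string.
raw = (
    '[{"id":"rdc_de2k3ds","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# Parse the array and index the records by comment id so a single
# comment's coding can be looked up directly.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Look up the coding for this comment and print each dimension.
coding = by_id["rdc_de2k3ds"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

This reproduces the Dimension/Value table for the record whose values match the coding result above (responsibility: none, emotion: indifference); the remaining records in the array would be indexed the same way.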