Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI doomsday scenarios are blown way out of proportion and AI won't become sentient in the forseeable future (next 100 years-500 years). For context, I am well versed in various forms of ai and robotics given my educational and professional life. Most people who speak on the subject (even so called experts like Elon Musk) are not well versed on the subject matter. The reality is that computers can be trained to make good guesses on narrow fields and draw good conclusions even better than humans in narrow applications but they are terrible at understanding the world when it unstructured for them. Even robotic driving is very hard and think how structured a road system is compared to literally anything else in modern life (lit roads, lines, traffic signs, gps, lights, rules, etc). So if you are worried about AI taking over... it wont happen for a long long time. If you are worried about robots taking your job, frankly you should be concerned if your job is highly structured (ie truck driver, delivery man, warehouse worker) But any job that requires artistry or adaptability is safe for the next 100-200 years (contractors, repairman, tradesmen). Now one last thing to make you feel better.... if AI sentience scares you ... does human sentience scare you too? There are literally 8billion humans on this planet and the vast majority are incapale of clothing themselves, feeding themselves, or even speak more than 1 language. Point is.. if humans are any indication...when robots become sentient... no guarantee they will be good at anything other than complaining about shit.
Source: reddit · AI Moral Status · 1674084950.0 · ♥ 11
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j4y4wtz", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_j4yvjvh", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_j4x7aqk", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "rdc_j4xlglq", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_j4xdc74", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"}
]
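The raw response above is a JSON array of per-comment codings, one object per comment id, with one value per dimension. A minimal sketch of how such a response could be parsed and looked up by id (the ids and values are taken from the response above; the variable names are illustrative, not part of any tool shown here):

```python
import json

# Raw LLM response: a JSON array of codings, copied from the record above.
raw = """[
  {"id": "rdc_j4y4wtz", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_j4yvjvh", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_j4x7aqk", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "rdc_j4xlglq", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_j4xdc74", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"}
]"""

# Index the codings by comment id so each comment's dimensions can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# The comment shown above corresponds to id rdc_j4xlglq.
coding = codings["rdc_j4xlglq"]
print(coding["responsibility"], coding["emotion"])  # none indifference
```

In practice the response string would need validation (the model may emit malformed JSON or unexpected dimension values) before the codings are stored.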