Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yup. Even in the very best scenario where we build superintelligent AI and nothing goes wrong on a technical level, how slow or fast the transition phase is makes *all* the difference. If we get recursive self-improvement on the timescale of days or weeks and everyone loses their jobs pretty much at the same time, then we have a shot at emergency-overhauling the system into some kind of ubi. If instead jobs are replaced slowly one-by-one, industry after industry, on a timescale of years, then the transition period will be immensely painful, filled with half-hearted political posturing that doesn't really help anyone. People will lose their livelihoods, their homes, their ability to feed themselves until eventually we reach a critical mass of poverty that can't be ignored. I'm not looking forward to it.
Source: reddit · Topic: AI Moral Status · Timestamp: 1746974659.0 · ♥ 379
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mrrp09t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"rdc_mryfb84","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"rdc_mrwp9ko","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"rdc_mrr4nn0","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"rdc_mrr7zk4","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})
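Note that the raw response above is a JSON array whose closing `]` the model emitted as `)`, so strict `json.loads` rejects the whole batch; that would explain why every dimension for this comment was coded "unclear". A minimal sketch of a tolerant parser is below. The repair heuristic is an assumption for illustration, not the pipeline's actual code; only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the log above.

```python
import json

# Abbreviated stand-in for the logged response: a JSON array whose
# closing "]" was emitted as ")" (assumed failure mode, per the log above).
raw = ('[{"id":"rdc_mrrp09t","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"})')


def parse_coding(text: str) -> list[dict]:
    """Try strict JSON first; fall back to repairing a mismatched closer."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        repaired = text.rstrip()
        # Heuristic repair: an array opened with "[" but closed with ")".
        if repaired.startswith("[") and repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        return json.loads(repaired)


records = parse_coding(raw)
print(records[0]["emotion"])  # -> approval
```

With a fallback like this, the batch would parse and the per-record codes could be applied instead of defaulting to "unclear"; whether such leniency is desirable for an auditing tool is a separate design choice.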