Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not really odd that the first things we set AI to take on were creative, because those things are low stakes. The AI can fail and it won't hurt anyone, we'll just have more bad art. We've had bad art for years without it doing more harm than just irritating people. Things like water treatment, cancer treatment, energy generation etc. have to be done right or else, especially if it's nuclear. I don't know when I think the singularity will happen; I do love hearing people's guesses and why they guess as they do, but what people really don't get about the singularity is how quick it could 'take off.' With the connectivity of today's world, once one AI gets inspired it could happen if not overnight, then maybe over a long weekend.
reddit · AI Governance · 1673581930.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j43ltfn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j42nu3a", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_j42oh54", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j42tdj9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_j44rl9l", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
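A minimal sketch of how a batched response like the one above can be turned into per-comment coding results. It assumes (as the record suggests) that each element's `id` maps one-to-one to a comment, and that the dimension names are exactly the JSON keys shown; the lookup id `rdc_j43ltfn` is taken from the first element of the response.

```python
import json

# A shortened copy of the batched LLM response: a JSON array with one
# coded item per comment id.
raw = '''[
  {"id": "rdc_j43ltfn", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j42nu3a", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

items = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up
# directly instead of scanning the whole array.
by_id = {item["id"]: item for item in items}

coding = by_id["rdc_j43ltfn"]
print(coding["emotion"])  # approval
print(coding["reasoning"])  # consequentialist
```

Indexing by `id` also makes it easy to detect missing or duplicate ids in a batch before trusting the coding, e.g. by comparing `len(by_id)` against `len(items)`.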