Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It also assumes that such a threat would be a result of a single monolithic system. Or an oligarchic one. I can't remember the name, but one science fiction story I read hypothesised that a more likely risk of AI isn't one of "AI god hates humans", but rather "Dumber AI systems are easier to build, so will come first and become ubiquitous. But their behaviour will have motivations that are very goal orientated, they will not understand consequences beyond their task, their behaviour and solution space will be hard to predict, let alone constrain, and all of this plus lack of human agency will likely lead to massive industrial accidents." At the start of the story, a dumb AI in charge of a lunar mass driver decides that it will be more efficient to overdrive its launcher coils to achieve _direct_ Earth delivery of materials, rather than a safe lunar orbit for pickup by delivery shuttles. Thankfully one of the shuttle pilots identifies the issue and kamikazes their shuttle into the AI before they lose too many arcology districts.
reddit · AI Governance · 1716800763.0 · ♥ 11
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_l5uxame","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},{"id":"rdc_l5vvkfi","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},{"id":"rdc_l5vzh5a","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},{"id":"rdc_l5ughy0","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},{"id":"rdc_l5uyz9e","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
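The coded result shown above matches the entry with id rdc_l5uyz9e in this batch response. A minimal sketch of how such a batch response could be parsed back into per-comment codes (assuming the JSON is valid exactly as returned; the variable names here are illustrative, not part of the tool):

```python
import json

# A trimmed copy of the raw batch response above: each entry codes one
# comment on four dimensions (responsibility, reasoning, policy, emotion).
raw = (
    '[{"id":"rdc_l5uxame","responsibility":"unclear","reasoning":"unclear",'
    '"policy":"unclear","emotion":"unclear"},'
    '{"id":"rdc_l5uyz9e","responsibility":"distributed",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)

entries = json.loads(raw)

# Index the batch by comment id so one comment's codes can be looked up.
by_id = {e["id"]: e for e in entries}

codes = by_id["rdc_l5uyz9e"]
print(codes["responsibility"], codes["reasoning"],
      codes["policy"], codes["emotion"])
# distributed consequentialist unclear fear
```

In practice a parser like this would also validate that every dimension value falls in the coding scheme's allowed set before rendering a table like the one above.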