Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI cannot cause mass casualties. It is simply not possible for any current AI to CAUSE any of those things, because GenAI currently produces only text and images, unless someone puts it in a mobile chassis of some kind.

Of course, people who manufacture steak knives would be furious if they were held liable when someone cuts someone's junk off with one, and rightly so... As would box cutter manufacturers if they were held liable when people hijacked a plane using one.

I would be fully supportive of laws requiring licensure for humanoid robots, placing liability for their actions on the licensee, and requiring high-liability insurance against misuse... But liability ought to be borne by the person who has responsibility for creating substantial physical action outside the isolated states of a computer system's memory or pixels on a screen, and controls ought to be placed on *actions* pursuant to other *actions*, rather than on mere twiddlings of bits and lightings of pixels.

A gun is dangerous because it is the last and final tool necessary to allow ANY agent of ANY kind capable of holding it to do violence. An AI can, of course, teach someone how to make a highly radioactive nuclear reactor in their shed. *So can the Wiki page on The Nuclear Boyscout*. What it cannot do is actually make it any easier to build that reactor, because *nuclear materials are tightly regulated and scanned for*.

For pretty much any dangerous act, the reason it does not happen is not that the act itself is obfuscated or arcane or esoteric in any way. The reason dangerous acts are difficult to carry out is that *we control the precursors heavily*. That's how you prevent dangerous acts... by regulating chemicals and materials, not by regulating glorified half-baked textbooks.
reddit AI Responsibility 1724537707.0 ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ljr04dm", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_ljr5ox8", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_ljrrwo9", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ljrs58c", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_ljuigro", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
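A minimal sketch of how the coded row could be recovered from the raw response. This assumes the tool matches batch records to comments by their "id" field, and that "rdc_ljrrwo9" is the record for the comment shown (it is the one whose values agree with the Coding Result table); both are illustrative assumptions, not confirmed behavior of the tool.

```python
import json

# Raw LLM response as shown above: one JSON record per coded comment in the batch.
raw = '''
[
  {"id": "rdc_ljr04dm", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_ljr5ox8", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_ljrrwo9", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ljrs58c", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_ljuigro", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

records = json.loads(raw)

# Assumed lookup: select the record whose id belongs to this comment.
# rdc_ljrrwo9 is the entry matching the Coding Result table values.
match = next(r for r in records if r["id"] == "rdc_ljrrwo9")
print(match["responsibility"], match["reasoning"], match["policy"], match["emotion"])
```

If the model returned malformed JSON instead, `json.loads` would raise `json.JSONDecodeError`, which is one reason a viewer like this keeps the raw response available for inspection alongside the parsed table.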