Raw LLM Responses

Inspect the exact model output that produced the coding for any comment.

Comment
The pharmaceutical comparison you made is the most chilling part. If we treat AI like a drug that cannot be recalled from the bloodstream of an enterprise, the duty of disclosure shifts from marketing fluff to a rigorous stress-test of the model's absolute failure ceiling. We are moving from a world of Model Cards to a world of Black Box Warnings. If you can't kill the process remotely, the liability shouldn't disappear; it should just front-load onto the safety alignment phase with massive punitive stakes. The real legal precedent here will be whether lack of control is viewed as a technical limitation or a negligent design choice. If you build a product that is inherently uncontrollable, "I couldn't stop it" sounds less like a defense and more like a confession.
Source: reddit · Thread: Viral AI Reaction · Posted: 1777023132 (Unix) · ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_ohyyv9k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_ohzmxky","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_ohyzyxr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"rdc_ohzd9v3","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"rdc_ohzjtke","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]