Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This question assumes that humans are somehow different than 'mere tools' in such a way that 'human standards' of ethics are valid. Human standards are bullshit. I've yet to see a proof for 'free will'. Why would anyone assume that consciousness somehow surpasses the laws of physics?

Edit:

> Humans are self replicating biological robots. If we were a higher level species we would never make such dumb decisions like making killer robots that kill us. *-DISQUIS User USAMEDIALIES*

If we're robots, then the definition of 'responsibility' necessary for these ethical standards in question should apply to other robots. 'Robot' is defined as "a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer." 'Machine' is defined as "an apparatus using or applying mechanical power and having several parts, each with a definite function and together performing a particular task."

Humans that 'violate human rights' are considered to be 'broken humans' by the humans who hold the concept of 'human rights violation'. Because of the presence of the word 'violation', or an event which fails to meet a standard, one who holds the concept of 'human rights violation' must agree that it is 'wrong' for a human to be 'responsible' for certain actions. The humans in question are 'sub-standard humans'. If humans are robots, then the concept of 'sub-standard human' is identical to the concept of 'sub-standard robot'. Those who hold this concept must define a standard for the behavior of all robots. Biological robots are no more 'responsible' than other robots- all robots are systems of data transformation.
Source: reddit · Topic: AI Moral Status · Timestamp: 1429569703.0 · ♥ 1
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_cqjacgd", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_cqikxcw", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_cqj083t", "responsibility": "none",        "reasoning": "mixed",            "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_cqisk6b", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",     "emotion": "outrage"},
  {"id": "rdc_cqipxsl", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"}
]
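To verify a coded comment against the raw model output, the response can be parsed as a JSON array and looked up by record id. The sketch below uses the exact response shown above; mapping this comment to id `rdc_cqj083t` is an assumption, inferred from the fact that its values match the Coding Result table.

```python
import json

# Raw LLM response copied verbatim from the batch above: one coding
# record per comment, each with the four coded dimensions.
raw_response = """[
 {"id":"rdc_cqjacgd","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_cqikxcw","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_cqj083t","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
 {"id":"rdc_cqisk6b","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"rdc_cqipxsl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]"""

# Index the batch by record id for direct lookup.
records = {r["id"]: r for r in json.loads(raw_response)}

# Assumed id for the comment on this page; its values line up with the
# Coding Result table (responsibility=none, reasoning=mixed,
# policy=unclear, emotion=outrage).
code = records["rdc_cqj083t"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
```

Indexing by id rather than list position keeps the check robust if the model returns records in a different order than the comments were submitted.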