Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can you prove intent for AI? Because if I make a random word generator and it produces "kill yourself" by chance, or by me slowly removing possible words it can generate to help that outcome sentence... is the program at fault if ***I decide*** to do it then? It's more of a scapegoat at this point. If he was suicidal and there was no ChatGPT, it would be one of the thousands of forums where it's even worse; heck, you get "kill yourself" advice even on reddit... I do not think it's ChatGPT's fault, but it could be a contributing factor.
reddit · AI Governance · 1762489003.0 · ♥ 22
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_nnll0tr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_nnp5467","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"rdc_nnjd8u2","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"rdc_nnjea60","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"rdc_nnjkoew","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
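The coding result above reports every dimension as "unclear" even though the raw response carries labels for each comment id; one plausible cause is a parse failure when a batch response arrives slightly malformed (for example, a stray closing delimiter). Below is a minimal sketch of a tolerant parser, assuming the label sets inferred from this one response (they are not a documented schema) and a hypothetical `parse_codes` helper name:

```python
import json

# Allowed values per dimension — an assumption inferred from the labels
# visible in this response, not a documented codebook.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "outrage", "resignation", "fear",
                "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a batch-coded LLM response into {comment_id: codes}.

    Repairs a common garble (array closed with ')' instead of ']')
    before parsing, and drops rows with out-of-vocabulary labels.
    """
    raw = raw.strip()
    if raw.startswith("[") and raw.endswith(")"):
        raw = raw[:-1] + "]"  # repair the mismatched closing delimiter
    rows = json.loads(raw)
    out = {}
    for row in rows:
        codes = {k: v for k, v in row.items() if k != "id"}
        if all(codes.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[row["id"]] = codes
    return out
```

With a repair step like this, a response whose only defect is the closing delimiter still yields per-id codes instead of falling back to "unclear" across the board.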