Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's a [non-paywalled article](https://thehill.com/policy/technology/5686657-senate-passes-deepfake-bill/) on the subject. Cool. Good to give victims a legal avenue to sue when they are violated. However, seems like it should also go hand-in-glove with legislation to hold companies making these tools accountable for not doing more to prevent it in the first place. Yes, I'm aware you can run some of these models locally outside of any company restrictions, which is where allowing the victims to sue individuals will be helpful. I don't believe that should absolve the companies from bearing any responsibility at all if their actual platform is being used in this manner, though. It's especially indefensible when they don't prevent it from being used on photos of kids. Looking at you, Elon / Grok.
Source: reddit (AI Harm Incident) · timestamp 1768338610.0 · ♥ 83
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nzfpjh2", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "fear"},
  {"id": "rdc_nzf7etr", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "rdc_nzgxjfl", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nzg537e", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_nzgdpha", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
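The raw response is a JSON array with one record per coded comment. A minimal sketch of parsing and sanity-checking such a response, then looking up one comment's codes by id. The `ALLOWED` vocabularies are an assumption, inferred only from the values visible in this batch, and the mapping of `rdc_nzf7etr` to this comment is inferred from its values matching the Coding Result table:

```python
import json

# Raw model output copied from the response above (a JSON array, one record per comment).
RAW = '''[
 {"id":"rdc_nzfpjh2","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"fear"},
 {"id":"rdc_nzf7etr","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
 {"id":"rdc_nzgxjfl","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"},
 {"id":"rdc_nzg537e","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_nzgdpha","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

# Assumed coding vocabularies, inferred only from values seen in this batch.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "industry_self", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse the model output, dropping any record with an out-of-vocabulary value."""
    return [r for r in json.loads(raw)
            if all(r.get(dim) in vals for dim, vals in ALLOWED.items())]

def code_for(records: list[dict], comment_id: str) -> dict:
    """Look up a single comment's coded record by its id."""
    return next(r for r in records if r["id"] == comment_id)

records = parse_codes(RAW)
rec = code_for(records, "rdc_nzf7etr")  # values match the Coding Result table above
```

Dropping out-of-vocabulary records (rather than raising) mirrors how an annotation pipeline might tolerate occasional malformed model output while keeping the rest of the batch.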