Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not hard to add crypto signatures into devices that are able to verify the image integrity and ensure it hasn't been modified. This is the same way that code-signing or any other cryptographic verification would work (along with its pitfalls, e.g. DigiNotar). Then every service can display or check whether the image has been modified and/or generated not from an original source. If it doesn't have a verified crypto signature, then you can assume it's not trustworthy. You could even chain signatures so editors could "edit/enhance" an image and include their signature, so you know "Canon X took this image, then org XYZ edited it, and Facebook stripped tags". A full chain of edit evidence. It's simple, easily added to devices, and would nip this whole issue with "AI generated images" in the bud quickly, easily, and with very little end-user impact. Heck, Facebook and similar could just refuse to accept images without an appropriate approved signature. Obviously state actors could still probably get around this, but for the average "revenge porn" scenario depicted in this article, it would prevent this ever becoming a problem.
reddit AI Harm Incident 1670666332.0 ♥ 2
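The chained-signature scheme the commenter describes can be sketched as follows. This is a minimal illustration, not a real provenance system: it uses HMAC as a stand-in for the asymmetric device signatures the comment envisions (a real deployment would use per-device private keys, e.g. Ed25519, with a certificate chain), and all actor names and keys are hypothetical.

```python
import hashlib
import hmac
import json

def sign_entry(prev_digest: str, actor: str, action: str, key: bytes) -> dict:
    """Append one link to the provenance chain: who acted, what they did,
    and a MAC binding this link to the previous one so edits cannot be
    reordered or silently dropped."""
    payload = json.dumps(
        {"prev": prev_digest, "actor": actor, "action": action},
        sort_keys=True,
    ).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"prev": prev_digest, "actor": actor, "action": action, "sig": sig}

def verify_entry(entry: dict, key: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    payload = json.dumps(
        {"prev": entry["prev"], "actor": entry["actor"], "action": entry["action"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["sig"], expected)

# Hypothetical chain matching the comment: a camera captures the image,
# an editing org enhances it, and a platform strips tags.
image_digest = hashlib.sha256(b"raw image bytes").hexdigest()
keys = {"canon_x": b"camera-key", "org_xyz": b"editor-key", "facebook": b"platform-key"}

chain = []
prev = image_digest
for actor, action in [("canon_x", "captured"),
                      ("org_xyz", "enhanced"),
                      ("facebook", "stripped_tags")]:
    entry = sign_entry(prev, actor, action, keys[actor])
    chain.append(entry)
    prev = entry["sig"]  # each link chains off the previous signature

# A service would accept the image only if every link verifies.
trusted = all(verify_entry(e, keys[e["actor"]]) for e in chain)
```

Tampering with any link (say, changing `"enhanced"` to something else) breaks verification for that entry, which is the property the commenter relies on when suggesting platforms refuse unsigned or broken-chain images.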
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_izmub9o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izks94k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izld4i1","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_izmka4h","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_izn607s","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]