Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Yeah, the guy glazed over some crucial information the AI gave him. It's user er…" (ytr_UgwrMzrM1…)
- "The only intermediate solution is by being a freelancer and develop something us…" (ytc_UgwvQ46fg…)
- "Haha, that's a fun thought! Who knows what technology will look like in 2089? It…" (ytr_UgzALA1w9…)
- "AI video about rats in dollar general… DOLLAR GENERAL was spelled wrong 🥴 DOLLR …" (ytc_UgwETkBzm…)
- "Bruh there is only one existantial threat and thats humans. Can't wait til we ha…" (ytc_UgwZzfZcy…)
- "They are using ai predictive policing to terrorize adults as well and make terro…" (ytc_UgzKdFzOW…)
- "I think the ai though priceless like free so I'm not sure but it's nice…" (ytc_UgwMuTeJ9…)
- "Yann LeCun: we design laws that prevent people from doing bad things... and of c…" (ytc_UgxmH-ZSO…)
Comment
It's not hard to add crypto signatures to devices so that anyone can verify an image's integrity and confirm it hasn't been modified. This is the same way code signing or any other cryptographic verification works (along with its pitfalls, e.g. DigiNotar).
Then every service can display or check whether the image has been modified and/or was not produced from an original source. If it doesn't carry a verified crypto signature, you can assume it's not trustworthy.
You could even chain signatures so editors could "edit/enhance" an image and add their own signature, so you know "Canon X took this image, then org XYZ edited it, and Facebook stripped tags": a full chain of edit evidence.
It's simple, easily added to devices, and would nip this whole issue with "AI-generated images" in the bud quickly, easily, and with very little end-user impact. Heck, Facebook and similar services could simply refuse to accept images without an appropriately approved signature.
Obviously state actors could still probably get around this, but for the average "revenge porn" scenario depicted in this article, it would keep the problem from ever arising.
reddit
AI Harm Incident
1670666332.0 (Unix timestamp, ≈ 2022-12-10 UTC)
♥ 2
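The chained-signature scheme the commenter describes can be sketched in a few lines. This is a hypothetical illustration only: it uses HMAC with made-up per-party secrets (`canon_x`, `org_xyz`) as a symmetric stand-in for the asymmetric device key pairs a real provenance system (e.g. a C2PA-style design) would use.

```python
import hashlib
import hmac

# Hypothetical per-party secrets standing in for device/editor key pairs.
# A real deployment would use asymmetric signatures (e.g. Ed25519).
KEYS = {"canon_x": b"device-secret", "org_xyz": b"editor-secret"}

def sign_step(party, payload, prev_sig=b""):
    """Sign the payload chained to the previous link's signature."""
    mac = hmac.new(KEYS[party], prev_sig + payload, hashlib.sha256).hexdigest()
    return {"party": party, "sig": mac}

def verify_chain(payloads, chain):
    """Walk the chain: each link must sign its payload plus the prior signature."""
    prev = b""
    for payload, link in zip(payloads, chain):
        expected = hmac.new(KEYS[link["party"]], prev + payload,
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, link["sig"]):
            return False
        prev = link["sig"].encode()
    return True

# "Canon X took this image, and then org XYZ edited it."
original = b"raw sensor bytes"
edited = b"raw sensor bytes + color grade"
chain = [sign_step("canon_x", original)]
chain.append(sign_step("org_xyz", edited, chain[-1]["sig"].encode()))

print(verify_chain([original, edited], chain))       # True
print(verify_chain([original, b"tampered"], chain))  # False
```

Because each link signs the previous signature along with its payload, tampering with any step invalidates every later link, which is what makes the "full chain of edit evidence" checkable end to end.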
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_izmub9o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izks94k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_izld4i1","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_izmka4h","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_izn607s","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
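Coded output in this shape can be checked mechanically before it is stored. A minimal sketch, assuming the value sets observed in this dump are the complete code books for each dimension (the real code books may be larger):

```python
import json

# Allowed values inferred from this dump; hypothetical, not the full code book.
CODEBOOK = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def validate(raw):
    """Parse a raw LLM response and list entries with out-of-codebook values."""
    errors = []
    for entry in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            if entry.get(dim) not in allowed:
                errors.append((entry.get("id"), dim, entry.get(dim)))
    return errors

raw = '''[
 {"id":"rdc_izn607s","responsibility":"none","reasoning":"consequentialist",
  "policy":"liability","emotion":"indifference"},
 {"id":"rdc_bad","responsibility":"robot","reasoning":"consequentialist",
  "policy":"none","emotion":"fear"}
]'''
print(validate(raw))  # [('rdc_bad', 'responsibility', 'robot')]
```

A check like this catches the most common failure mode of LLM coders, namely emitting a plausible but out-of-schema label, before it silently ends up in the coded results table.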