Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
It is to the point where it is trivially easy to make, and for the foreseeable future there will be a group of people who will want to dehumanize and masturbate to a face/body they see a lot. Perhaps a tech solution is to combat the ability to acquire the images? Something undetectable to humans, but whenever it is fed into AI, the AI is messaged not to use it or it is camouflaged from use. [I know there are programs or something already in place to help try and fool AI.](https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/) Something where we see a picture of a hat, but to AI it looks like cake, and the more it looks at such images, the less competent the AI becomes. The issue I see is keeping that hidden poison through transfers and re-encodings. If someone records their screen while playing the TikTok video instead of downloading it properly, does the poison remain?
Source: reddit · AI Harm Incident · posted 1716386134.0 (Unix timestamp) · score -9
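The comment's closing question, whether an imperceptible "poison" survives screen recording and re-encoding, can be tested empirically. Below is a minimal sketch: it uses a small random perturbation as a stand-in for a real poisoning perturbation (tools like Nightshade compute theirs adversarially, not randomly) and measures how much of it survives a lossy JPEG round trip, a rough proxy for a screen capture and re-upload. The file name, epsilon, and quality setting are illustrative assumptions.

```python
import io

import numpy as np
from PIL import Image


def reencode(img: Image.Image, quality: int = 70) -> Image.Image:
    """Round-trip an image through lossy JPEG, approximating a
    screen-record / re-upload pipeline."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def perturbation_survival(clean: Image.Image, eps: float = 4.0) -> float:
    """Approximate fraction of perturbation energy left after re-encoding.

    Uses uniform noise as a stand-in for a real poisoning perturbation.
    """
    x = np.asarray(clean.convert("RGB"), dtype=np.float32)
    delta = np.random.uniform(-eps, eps, x.shape).astype(np.float32)
    poisoned = Image.fromarray(np.clip(x + delta, 0, 255).astype(np.uint8))
    # Compare re-encoded poisoned vs. re-encoded clean: what is left of
    # the perturbation after the codec has had its way with both.
    y = np.asarray(reencode(poisoned), dtype=np.float32)
    x2 = np.asarray(reencode(clean), dtype=np.float32)
    return float(np.linalg.norm(y - x2) / np.linalg.norm(delta))


if __name__ == "__main__":
    img = Image.open("example.jpg")  # hypothetical input file
    print(f"surviving perturbation energy: {perturbation_survival(img):.2f}")
```

Lossy codecs and rescaling tend to attenuate exactly the high-frequency detail that imperceptible perturbations live in, which is why screen recording is a plausible way to strip a naive poison; a robust scheme has to concentrate its perturbation in features that survive compression.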
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[ {"id":"rdc_micw43u","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_l56jcwk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_l5a8j26","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_l565keu","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_l56dyfj","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]