Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's pretty new. The ease of doing it, the quality of it, and the fact that it's fully AI, so less obvious to tell it's fake. Previously you could tell it was an actress's head copied onto a known porn pic or something, or the wrong body. This is trained on real pictures of her, and harder to differentiate. The other HUGE difference is that those pictures from the '90s were relegated to dark corners of the internet, only seen by those few who knew they were there and sought them out. These are being spread all over the place, on big social media platforms that "regular" people use. Essentially the digital difference between a picture in a dirty magazine, versus one spread on billboards all over town.
reddit · AI Harm Incident · 1706221593 · ♥ 173
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
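A minimal sketch of the record shape these four dimensions imply, written as a Python TypedDict. The value lists in the comments are only those observed in the raw response below; the full codebook may define more labels.

from typing import TypedDict

class CodedComment(TypedDict):
    # One coded comment, matching each entry in the raw LLM response below.
    id: str              # comment identifier, e.g. "rdc_kjkkmkt"
    responsibility: str  # observed values: none, unclear, company, government, user
    reasoning: str       # observed values: unclear, consequentialist, deontological
    policy: str          # observed values: none, unclear, regulate, liability
    emotion: str         # observed values: indifference, fear, outrage, resignation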
Raw LLM Response
[ {"id":"rdc_kjm2n1c","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_kjkkmkt","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"rdc_kjl13qc","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"rdc_kjkatpw","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"rdc_kjlj04e","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]