Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I know this guy used actual faces of real people for this stuff, and that's incredibly problematic...mostly for children, but adults are victims of this too. Dude should rot.
But the conversation of 100% "this isn't a real person" A.I. generated pornography really needs to be had and it needs to be understood. There have been people who've suggested how A.I. could be used to address pedophilia and even treat it, and I think it's worth examining like crazy to understand if A.I. could make things better, or make them worse.
Here's the for-instance: Some person, who has never seen child pornography, has never assaulted a child, and has never really made any sort of plan to put themselves in the position to do that...they realize that they are attracted to children but they're terrified of all the things that can happen, from harming a child to severe punishment - if they were to explore any of it.
How do we make sure that this person doesn't harm others? If they see a therapist, there's not much research that says that they can be "fixed." Voluntary castration (chemical or otherwise) seems a bit less than ideal, especially for a non-offender.
Does A.I. offer a potential treatment here, or would it just make things worse?
Like - would giving someone access to 100% A.I. generated media of children that don't exist...would it satisfy any urges and keep society/children safe from them, or would it just make them more eager to seek "the real thing?" What about if A.I. progresses to the point where we have Artificial General Intelligence - robots - that could fill this role?
I just think that there are probably a number of pedophiles out there where if we could magically know the real number, it would make us very uncomfortable. I think a number of these people have never offended. Is there a way to use AI to keep kids safe from them?
| Field | Value |
|---|---|
| Source | reddit |
| Category | AI Harm Incident |
| Timestamp | 1730123912.0 (Unix epoch) |
| Likes | ♥ 19 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_lu68v8o","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_lu5vzkj","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_lu6d6l6","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_lu6870d","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_lu6mzxq","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
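A raw response like the one above has to be parsed and validated before its fields can populate a Coding Result table. The following is a minimal sketch of that step, not the tool's actual implementation: the field names come from the JSON above, while the function name `parse_codes` and the validation logic are hypothetical.

```python
import json

# One record copied from the raw LLM response shown above
# (it matches the dimensions in the Coding Result table).
RAW_RESPONSE = '''[
{"id":"rdc_lu6d6l6","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]'''

# Every record in the response carries these five fields.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into a lookup table keyed by comment ID.

    Raises ValueError if the payload is not a JSON array of complete records.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of code records")
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        coded[rec["id"]] = rec
    return coded

codes = parse_codes(RAW_RESPONSE)
print(codes["rdc_lu6d6l6"]["reasoning"])  # prints "consequentialist"
```

Keying the parsed records by comment ID is what makes the "look up by comment ID" view cheap: each coded comment's dimensions are retrieved in one dictionary access.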