Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "People feed AI. Simple as. Earlier this week a 12yr old was almost arrested in …" (ytc_UgyY-lawq…)
- "I’m writing a book with my own ideas and my own words, I just sometimes feel lik…" (ytc_UgyAgiYXO…)
- "How about when AI understands your whole business then someone can more easily s…" (ytc_Ugxhy38Py…)
- "basically, This might just lead to govt demanding more taxes from corpors who wo…" (ytc_Ugy8ttLCC…)
- "Doctors have too much pride to ever come to terms with what is going on. Unersta…" (ytc_Ugwu-JFTo…)
- "I completely agree with you, it’s amazing to see how far people have come with t…" (ytc_Ugz3oC66M…)
- "@leangrypoulet7523 automated systems have serious running and maintenance demand…" (ytr_UgyYb90_s…)
- "That's also not how artists use AI. The only people who press a button on an AI …" (rdc_n3y53gc)
Comment
I guess my question is, how do you prove—without a reasonable doubt—that what is subjectively viewed by the user as an underage image, was the intended outcome?
Personally, I believe the dataset would have to be regulated by the company providing the service. For data models requiring third party input (the user), their data (images) would then need to be passed through a filter of sorts. If something bad skips through, it’s on the business to correct it and be held liable for reporting it.
There are a lot of ways to cut this, but I think that would be a good cornerstone.
Source: reddit
Topic: AI Harm Incident
Posted: 2023-09-24 (Unix timestamp 1695571750)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:06:44.921194 |
Raw LLM Response
```json
[
  {"id": "rdc_jwuwa7n", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_k0ftqv3", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_k20bdf5", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_k6b47sh", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_k6d4lkn", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]
```
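Because the raw model output is a JSON array of records keyed by comment ID, looking up the coding for any comment is a parse-and-index step. A minimal sketch of that lookup in Python (the field names mirror the response above; `RAW_RESPONSE` and `index_by_id` are illustrative names, not the tool's actual code, and the array is truncated to two records for brevity):

```python
import json

# Raw LLM response as displayed above, truncated to two records.
RAW_RESPONSE = """
[
  {"id": "rdc_jwuwa7n", "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_k20bdf5", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and index each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)
print(codes["rdc_k20bdf5"]["policy"])  # -> regulate
```

Indexing by ID rather than scanning the array each time makes repeated lookups constant-time and surfaces duplicate IDs early if the model emits the same comment twice.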