Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I guess my question is, how do you prove—without a reasonable doubt—that what is subjectively viewed by the user as an underage image, was the intended outcome? Personally, I believe the dataset would have to be regulated by the company providing the service. For data models requiring third party input (the user), their data (images) would then need to be passed through a filter of sorts. If something bad skips through, it’s on the business to correct it and be held liable for reporting it. There are a lot of ways to cut this, but I think that would be a good cornerstone.
Source: reddit · AI Harm Incident · 1695571750.0 · ♥ 2
Coding Result
Responsibility: company
Reasoning: unclear
Policy: regulate
Emotion: fear
Coded at: 2026-04-25T08:06:44.921194
Raw LLM Response
[
  {"id":"rdc_jwuwa7n","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_k0ftqv3","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_k20bdf5","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_k6b47sh","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_k6d4lkn","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
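A minimal sketch of how the raw batch response above could be inspected: the model returns a JSON array with one object per coded comment, so parsing it and indexing by `id` recovers the codes for a single comment. The raw string here is copied from the response above; the lookup helper and variable names are illustrative, not part of the original tooling.

```python
import json

# Raw batch response exactly as returned by the model: a JSON array with
# one object per coded comment, keyed by the comment id.
raw = """[
  {"id":"rdc_jwuwa7n","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_k0ftqv3","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_k20bdf5","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_k6b47sh","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_k6d4lkn","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]"""

# Index the batch by id so one comment's coded dimensions can be looked up.
coded = {entry["id"]: entry for entry in json.loads(raw)}

row = coded["rdc_k20bdf5"]
print(row["responsibility"], row["policy"], row["emotion"])  # → company regulate fear
```

Indexing by `id` makes it easy to spot-check a displayed coding result against the raw model output for the same comment.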