Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's a tough problem. I'm guessing they were trying to avoid Gemini reinforcing stereotypes (e.g. "show me a manager" always generating men, "show me a criminal" always generating black people, etc), but there are situations where it's inappropriate to inject diversity (e.g. "show me a Nazi soldier", "show me the founding fathers", etc). It seems like they need to make an exception for images based on history.
reddit · AI Harm Incident · 1708857235.0 · ♥ 276
Coding Result
Dimension       Value
Responsibility  company
Reasoning       utilitarian
Policy          regulate
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ks3an6g", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ks1u1on", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "rdc_ks20bks", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_ks2msso", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ks2g2pg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
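
A minimal sketch of how such a raw response can be inspected programmatically, assuming the model returns a JSON array of per-comment codings exactly like the one above (the field names `responsibility`, `reasoning`, `policy`, `emotion` and the `rdc_…` ids are taken from that response):

```python
import json

# Raw LLM response, copied verbatim from the coding run above.
raw = (
    '[{"id":"rdc_ks3an6g","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},'
    '{"id":"rdc_ks1u1on","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},'
    '{"id":"rdc_ks20bks","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},'
    '{"id":"rdc_ks2msso","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_ks2g2pg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]'
)

records = json.loads(raw)

# Index the per-comment codings by id for quick lookup.
by_id = {r["id"]: r for r in records}

print(by_id["rdc_ks1u1on"]["responsibility"])  # → company
print(by_id["rdc_ks1u1on"]["policy"])          # → regulate
```

A lookup like this also makes it easy to spot mismatches between a displayed coding and the raw output, e.g. the table above shows Reasoning "utilitarian" while the raw record for this comment says "consequentialist".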