Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question becomes, what criteria is it using? For example if the AI was supposed to favor people with higher income, the AI will inherently favor White and Asian people. Not because of the race, but because white and Asian communities have higher income on average. It's doing what it's supposed to do
YouTube · AI Bias · 2022-12-20T17:2… · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw4NyqIazFKu9_cWB14AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyaHSfxmsdRScBJy754AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgxH2725miPXV9U0d9B4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzBSwGie7TE64slGi94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgxTzQMqsF80j8ZhDXl4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "indifference"},
  {"id": "ytc_Ugyfa1xou0nu2qVGgTt4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgwdpDpIjF1ypQh2Urx4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzNJXVbo0-W6UfPXrN4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_Ugya73HB4i8M-YBmLkN4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxbvwFahsk3rAVs3xp4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "resignation"}
]
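A minimal sketch of how such a raw batch response can be inspected programmatically, assuming the response is valid JSON as above and that each record's `id` is the coded comment's identifier. The two records embedded here are copied from the array for brevity:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (Truncated to two records from the batch above for illustration.)
raw = '''[
  {"id": "ytc_UgxTzQMqsF80j8ZhDXl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyaHSfxmsdRScBJy754AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

records = json.loads(raw)

# Index by comment id so any single comment's coding can be looked up.
by_id = {r["id"]: r for r in records}

# The record whose values match the Coding Result shown above.
coded = by_id["ytc_UgxTzQMqsF80j8ZhDXl4AaABAg"]
print(coded["responsibility"], coded["policy"])  # developer regulate
```

In practice the whole array would come straight from the model output rather than a string literal, and a lookup like this makes it easy to cross-check any Coding Result against the raw response.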