Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah, there's really not a good argument there. If the guy was shot by police twice despite never being armed, that would be one thing. But if he is getting in fights with other people, then the AI found someone who is likely to get themselves into trouble. Which is exactly what it's supposed to do. The discrepancy between levels of care when AI is involved in making those decisions is concerning, but there's actual data there to show that objectively sicker people are being denied care. You need to figure out if the AI is being inadvertently racist because of the data set, but there is something to at least deal with there. The guy getting shot feels like the software did its job. Find people who are going to be involved in gun fights, and it found one.
youtube AI Bias 2023-01-07T17:1… ♥ 4
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugwb8XldhEM05TUgOzp4AaABAg.9kHbrjdDS2d9k_MBVnIGK5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwLsv1I35XtUlmMJSd4AaABAg.9kGDTpd3S9S9kH33mv4sFQ","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwLsv1I35XtUlmMJSd4AaABAg.9kGDTpd3S9S9kIHoRp8-ul","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwLsv1I35XtUlmMJSd4AaABAg.9kGDTpd3S9S9kINRG6Y0kl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzgytquxwcoNLqA-8N4AaABAg.9kFxUgznOFZ9kGImTeCS2h","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwoH7y_8ZWXsL_9BAR4AaABAg.9kEnyKQxReC9kF6xZogN-I","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyCRVdYI4N4BKnA4r94AaABAg.9kCaBaQBBDg9kaJyzpYpXr","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgyCRVdYI4N4BKnA4r94AaABAg.9kCaBaQBBDg9keEznbvERw","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgytTeUutp8xDE05MJl4AaABAg.9kCZh9sKwaF9kH_qx4XPCH","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgyaEf1pQyyiFQhWAP54AaABAg.9kBSFJCLCGV9kC9_NQaqBo","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
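The raw response is a JSON array of per-comment codes, one object per comment id. A minimal sketch of how such output could be parsed and validated (the `ALLOWED` vocabulary is inferred from the values seen above, and `parse_codes` is a hypothetical helper, not part of the original pipeline):

```python
import json

# Allowed values per coding dimension, inferred from the codes seen above.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse the LLM's JSON array and index the codes by comment id,
    rejecting any value outside the allowed vocabulary."""
    coded = {}
    for entry in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim}={entry.get(dim)!r}")
        coded[entry.pop("id")] = entry
    return coded

# Example with a single (shortened, hypothetical) entry:
raw = ('[{"id":"ytr_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytr_example"]["policy"])  # liability
```

Indexing by id makes it easy to join each code back to its source comment, and the vocabulary check catches the occasional off-schema label an LLM coder can emit.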