Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"we've been working to prevent that technology from perpetuating *negative* human bias". Right. So you'll be working to PREVENT *negative* bias, but NOT *positive* bias... Who gets to decide whether a particular bias is, on the whole, negative or positive? And surely you'll be tempted to ENFORCE *positive* bias to socially engineer your "positive" ideals. Any bias can be rationalised as a *positive* bias, so the use of this qualifier is legitimately frightening. You purposefully and publicly leave the door open to manipulate your machine algorithms, and by extension your users, based on what "Google" thinks is *positive*. We'll get an intersectional affirmative action AI from Google soon, while Google will claim publicly that it won't have a bias. We can see the precursor for that on YouTube already. I believe that is actually *evil.*
youtube AI Bias 2019-11-02T00:2… ♥ 27
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxAY6vcVb5jQP0_7F14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy1qDlV3qqA7q7Iagt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGe0umGerQfKERqFR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwMF7HedZmU4OEkHHl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxxFgBZsd2sB29WHa94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxmjqsLNG4zUmre5id4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz4aMZ6dFzEz4VKLDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyS4XCAdXoClY7EQrR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwSmwYMH3KGx0Cs91F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8lyj0egSFI2SAk6F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
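The raw response is a JSON array with one record per coded comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and indexed by comment id (the variable names are illustrative, not from the tool; only the record shape above is assumed):

```python
import json

# One record from the raw LLM response above, in the same shape.
raw_response = """
[
  {"id": "ytc_UgyGe0umGerQfKERqFR4AaABAg",
   "responsibility": "company",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "fear"}
]
"""

# Parse the array and index the coding records by comment id,
# so a comment's codes can be looked up directly.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

entry = codes_by_id["ytc_UgyGe0umGerQfKERqFR4AaABAg"]
print(entry["reasoning"], entry["emotion"])  # → deontological fear
```

Indexing by `id` makes it straightforward to join each model-produced code back to the original comment, as the "Coding Result" block above does for the quoted comment.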