Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Just because something is based on data, doesn't automatically make it neutral." This from the DATA company that fired a guy for actually using DATA. Yes Google, we know the kettle is black. Do you ever think that maybe if humanity is searching for "inappropriate" things that maybe they aren't actually inappropriate? Just a reflection of our own humanity? As George Carlin once said "You put [trash] in, ya get [trash] out." I'm sorry humanity is what you want it to be, but facts (by definition) don't lie. Facts don't have a stake in the game. They are, by definition, neutral. Maybe I'd ought to give Bing a try, at least they had the balls to try something risky, even if it meant the data it was being fed wasn't what we wanted to here and it turned into a hateful, bigoted AI. That actually says something meaningful about us. That all being said, Google does have a point. What we as humans create, does reflect on our own thoughts and perspectives. That's why we as a people have to learn how to take in that information and understand why it was said, how it was said, what it meant to them, and what that means to us. Only then can we as a people decide for ourselves what to do. But we can't make that distinction fairly if the data is being censored.
youtube AI Bias 2017-08-26T10:3… ♥ 9
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz6tpnRDwDgaqlVrkZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwaLC2iCTvV9auByTh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwVJ9pJ9iCCq19lXpN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxUF0NgMk2XbjcOTLN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxllLQiKMOsUQd4s1B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwsA1foOtxrFwSohqN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxPismFLqwGv1IlNE14AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw6x93AZZK_5qAufQh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw74hre1_pHqH3V0JV4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzmIGscl53btE5R0OV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
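To inspect the coding for any one comment, the raw response can be parsed and indexed by comment id. A minimal sketch, assuming the raw response is a valid JSON array of objects shaped as above (the single-entry `raw` string here is a trimmed stand-in for the full output):

```python
import json

# Stand-in for the raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_UgwVJ9pJ9iCCq19lXpN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding of the comment shown above.
coding = codings["ytc_UgwVJ9pJ9iCCq19lXpN4AaABAg"]
print(coding["emotion"])  # outrage
```

The lookup returns the same dimension values reported in the Coding Result table (responsibility: company, reasoning: deontological, policy: none, emotion: outrage).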