Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `rdc_jd7o8vs`: Microsoft is a big factor in the development of AI isn’t it? They’ve had Cortana…
- `ytc_UgxGMa3PV…`: Put everyone on SSI, AUTOMATE EVERYTHING VIA AI: you might ask who's going to pa…
- `ytc_UgyZxnn6a…`: LISTEN... IF KIDS WERE GIVEN AN OPPORTUNITY to do what drives them .. RATHER THA…
- `ytr_UgzLcRyYg…`: Which part of it is BS? Youre super vague here, are you denying AI exists? Denyi…
- `ytr_Ugx0PLl5O…`: Does a paintbrush make the art? No. Does the person using the paintbrush have ta…
- `ytc_Ugw1pHbAj…`: I've always said the advancement of technology is good as long as it is in the m…
- `ytc_UgzxCCdw8…`: Very good videos. One detail, though: turn up the characters' volume a bit more… (translated from Spanish)
- `ytc_Ugwf1PzNJ…`: Guys i would say we may be geting mislead, there are ppl sharing unike ideas and…
Comment
"we've been working to prevent that technology from perpetuating *negative* human bias".
Right. So you'll be working to PREVENT *negative* bias, but NOT *positive* bias... Who gets to decide whether a particular bias is, on the whole, negative or positive? And surely you'll be tempted to ENFORCE *positive* bias to socially engineer your "positive" ideals.
Any bias can be rationalised as a *positive* bias, so the use of this qualifier is legitimately frightening. You purposefully and publicly leave the door open to manipulate your machine algorithms, and by extension your users, based on what "Google" thinks is *positive*.
We'll get an intersectional affirmative action AI from Google soon, while Google will claim publicly that it won't have a bias. We can see the precursor for that on YouTube already.
I believe that is actually *evil.*
youtube · AI Bias · 2019-11-02T00:2… · ♥ 27
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxAY6vcVb5jQP0_7F14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy1qDlV3qqA7q7Iagt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGe0umGerQfKERqFR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMF7HedZmU4OEkHHl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxxFgBZsd2sB29WHa94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxmjqsLNG4zUmre5id4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz4aMZ6dFzEz4VKLDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS4XCAdXoClY7EQrR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwSmwYMH3KGx0Cs91F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8lyj0egSFI2SAk6F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
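A batch response like the one above can be parsed and sanity-checked before the codes are stored. The following is a minimal Python sketch, not the tool's actual implementation: the four dimension names come from the coding-result table, but the allowed value sets are inferred from this single sample and are almost certainly incomplete, so treat `ALLOWED` as an assumption to be replaced with the real codebook.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the one
# sample batch shown above; the real codebook likely has more values.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record's codes."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing comment id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example with a single (hypothetical) record in the same shape as above.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
       '"policy":"unclear","emotion":"fear"}]')
coded = validate_batch(raw)

# Index by comment ID to support the "look up by comment ID" view.
by_id = {rec["id"]: rec for rec in coded}
print(by_id["ytc_x"]["emotion"])  # fear
```

Validating against a fixed value set catches the most common failure mode of LLM coders, which is emitting a label outside the codebook; a record that fails here can be queued for re-coding rather than silently stored.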