Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.

Random samples:

- ytc_UgxHY70qN…: This is a lot like our homeschool! This is a school I can get behind.…
- ytc_UgzTMbCqN…: This is why I use "LiNkY tHe BeTtEr ChArAcTeR aI wItH nObOdY kNoWiNg WhAt YoU sA…
- ytr_UgwPTcbY-…: @markcrawford5810Taking an artists unique style and feeding it to an ai is inde…
- ytc_UgzrA5RDP…: I feel bad for those AI creators, getting treated like an unhuman person, and th…
- ytc_UgxgbvcLh…: The key issue seems to be that ChatGPT is a total Yes Man. When asking if you ar…
- ytr_UgzULmXSN…: Most pop music are just repeated using the similar chords though that's why it's…
- ytc_UgxSJWW5x…: We're starting to see deterioration in AI images because they're feeding on publ…
- ytc_Ugwn72Me5…: lol AI will 100% be able to create that painting. We are acting like in the last…
Comment
so how do you eliminate the human bias that controls the moderation of the machines human bias? doesn't seem to be much help the "limiting of offensive results" only removed "offensive" opinions that google doesn't agree with, either manually or through new human bias influenced machine learning. opinions like that of the man who google recently fired for questioning google's current stance on workplace sexism. even if you agree with google for this example, there could be anything that google finds offensive that you don't. if the only information available is the information not censored by google, whether or not you think that the results would be in your personal favor, the control over what opinions people have access to should be the right of no person or organization.
Source: youtube · AI Bias · 2017-09-08T22:2… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzlllO_5dApu_u1fzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyhW6a7nhs5Eenu4fZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdRhIPvr1v9Y0a0PV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugxmxk0NBv8WRZmXRIB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwNC1lufUTKzRJRvXd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkaAGLi4cd1cmhZmh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwB8UU3IEk3wxgi9DJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOxXDHijwz3eTGELx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwc7Uru4h_CTxKEzyt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw0VU55OAYT1tpt4FB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
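A minimal sketch of how a raw batch response like the one above could be parsed and validated before its codings are stored. The allowed values per dimension are assumed from the examples on this page (the real schema may define more categories), and `parse_batch` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample responses shown on this page; the actual schema may differ.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval"},
}


def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM batch response and index codings by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a
    value outside the assumed schema, so malformed model output is
    caught before it reaches the database.
    """
    codings: dict[str, dict[str, str]] = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in SCHEMA.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
        codings[cid] = {dim: entry[dim] for dim in SCHEMA}
    return codings


# Hypothetical single-entry example mirroring the response format above.
raw = ('[{"id":"ytc_X","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate",'
       '"emotion":"outrage"}]')
print(parse_batch(raw)["ytc_X"]["policy"])  # regulate
```

Validating against a closed vocabulary at parse time is what makes the per-comment "Coding Result" table safe to render: every displayed value is guaranteed to be one the codebook defines.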