# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Comment
If you're training your model on what a physicist looks like, and you use a training dataset of past physicists, you are training a model on what past physicists looked like. There is no bias there. The problem is training data that does not match the question that is trying to be solved. Certain people are using these problems as a justification for interjecting *actual* bias through human intervention. It's dishonest and anti-intellectual.
Honestly, this is machine learning 101. It's just issues of overfitting and underfitting. Poor data sets. I guess they've started to call it "bias" so that they can claim a moral high ground while controlling outcomes to their liking. That's what they've been doing by the way.
Source: youtube · Video: "AI Bias" · Posted: 2019-10-24T15:3… · ♥ 17
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response
```json
[
{"id":"ytc_UgxAY6vcVb5jQP0_7F14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy1qDlV3qqA7q7Iagt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGe0umGerQfKERqFR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMF7HedZmU4OEkHHl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxxFgBZsd2sB29WHa94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxmjqsLNG4zUmre5id4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz4aMZ6dFzEz4VKLDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS4XCAdXoClY7EQrR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwSmwYMH3KGx0Cs91F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8lyj0egSFI2SAk6F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
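The raw response is a JSON array with one object per comment ID. A minimal sketch of how such a batch could be parsed and indexed for lookup (the variable names are illustrative, not the tool's actual code; it assumes only the array schema shown above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# using one entry from the batch shown above as an example.
raw = """[
  {"id": "ytc_Ugz4aMZ6dFzEz4VKLDx4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]"""

# Index codings by comment ID so any coded comment can be inspected.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugz4aMZ6dFzEz4VKLDx4AaABAg"]
print(coding["emotion"])  # indifference
```

In practice, model output may also need light cleanup (e.g. stripping a surrounding code fence) before `json.loads` succeeds.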