Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "So, who was the first to start using AI for brain surgery when they are still on…" (ytc_Ugz1OsP9B…)
- "How could u regulate it if Russia developing their own AI?! They have their own …" (ytc_UgyX59aNP…)
- "Between the JAB and AI Pinky and the Brain can now take over the world!…" (ytc_UgyCMOX3D…)
- "People like talking to others with similar ideas so the AI was trained to respon…" (ytc_UgzINbIiI…)
- "And I wonder how many of you invested, or support, the HUGE DATA CENTERS being b…" (ytc_Ugw5fUBr3…)
- "Don't worry, we understand that interactions with AI can sometimes feel unsettli…" (ytr_UgzASd8Ql…)
- "I highly recommend Clever AI Humanizer, it is100% free. There are also many inte…" (ytc_Ugz0q_G6-…)
- "I am happy when ai becomes super intelligent.... Then it can finally put things …" (ytc_UgxZmRPAW…)
Comment
Unfortunately the “predictive AI is problematic” has happened many times. For instance, a program designed to predict child abuse/neglect somehow only featured poor families. Why? It only drew data from families with government insurance. Biases like this are often overlooked by the people trying to promote these AI programs, who often say it’s “more fair” as an AI is making the decision, not a person. In reality it makes problems we already have (like child abuse/neglect being ignored in “rich” families and over-reported/acted upon by poor families) and gives them validation instead of fixing them.
youtube · AI Bias · 2022-12-14T00:3… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz4D77NgJ4tBUu1Sbt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwnintqsVHcSrX0emh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9enLQMBMOi9FXBgt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx193sEaIW7s8MBS0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzzoMn7rCAttMj-6994AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzypZgsv-9bpJaEAWV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEH91dhxYYF-gX2Gp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwRHaTHvu8f02TMmxV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugym-8ujIaUbx5iQelx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPFKApQHl2OnGbmv14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
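A raw batch response like the one above can be parsed into a per-comment lookup and checked against the expected label sets. The sketch below is a minimal, hypothetical validator: the allowed values are inferred only from the labels visible on this page (the full codebook may define more), and `validate_batch` is an illustrative helper name, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the labels visible in this
# page's coding table and raw response; the real codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "government", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed",
                "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response, index codings by comment ID,
    and flag any value outside the expected label set."""
    rows = json.loads(raw)
    coded, errors = {}, []
    for row in rows:
        cid = row.get("id")
        if not cid:
            errors.append(("<missing id>", "id", None))
            continue
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((cid, dim, row.get(dim)))
        # Keep only the coding dimensions, keyed by comment ID for lookup.
        coded[cid] = {dim: row.get(dim) for dim in ALLOWED}
    return {"coded": coded, "errors": errors}
```

With the response above loaded, a comment's coding is then retrievable by its full ID, e.g. `result["coded"]["ytc_Ugz4D77NgJ4tBUu1Sbt4AaABAg"]`, which mirrors the "look up by comment ID" workflow of this page.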