Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I like AI Like the one that detects cancer and the one in alien isolation an… (`ytc_UgybWhjDO…`)
- Both can be true about good effects in AI and negatives. Scale of each scenario… (`ytc_Ugw5cmG_X…`)
- AUTOFAC by Philip K Dick. The AI super factory kills off all its potential custo… (`rdc_oh37mju`)
- Imagine us Human being Scared of a robot? We have God and Weapons Guns and Milit… (`ytc_Ugxypj_c1…`)
- @arthursmith3401 Thank you for your comment! I appreciate your concern for my we… (`ytr_UgxHesJ9r…`)
- HUH?! They sound so cool... My art teacher last year had us "make a comic using … (`ytr_UgyBU1u8h…`)
- That's how things happen. People who say they are not gonna buy from Amazon, wo… (`ytc_UgxkT-TKx…`)
- Another difficulty with AI existing is knowing what's real and false online. For… (`ytr_UgxeNpTke…`)
Comment
This is not biased data sets, it's that the data available reflect the reality that the data comes from. People of colour often wait longer to go to the doctor (because of cost) so the data that the AI is trained on is very good at predicting people of colour who are sicker before needing care. This error arises because we are trying to match AI models to reality instead of to the ideal states. The biggest problem is in some of these areas your ethnic background matters (like in the medical field) and so you need to improve the current medical system to then get the data that you use to train the model of AI. This is where countries like Canada are so great because not only do more people go to the doctor sooner they also go more often. You can take a patient's historical data and feed it in and then use it to predict the future illness (which you can already be aware of.)
Source: youtube · Topic: AI Bias · Posted: 2022-12-23T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
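Each coded comment gets one value per dimension. A minimal validation sketch in Python, assuming the allowed values are exactly those that appear in the codings on this page (the actual codebook may define more, so `CODEBOOK` below is an inference, not the tool's definition):

```python
# Allowed values per dimension, inferred from the codings shown on this page;
# the real codebook may include additional values.
CODEBOOK = {
    "responsibility": {"ai_itself", "distributed", "developer", "company", "user"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"ban", "liability", "regulate", "unclear", "industry_self"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval", "resignation"},
}

def validate(coding: dict) -> list:
    """Return a list of problems with one coding dict (empty list = valid)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding shown in the table above passes:
print(validate({"responsibility": "distributed", "reasoning": "consequentialist",
                "policy": "industry_self", "emotion": "resignation"}))  # []
```

A check like this is useful before accepting a batch, since LLM coders occasionally emit values outside the codebook.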
Raw LLM Response
[
{"id":"ytc_Ugy4KxxlERLxFGZMUWh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzUcODYWYp-tY58Io54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzFUTkKwQHGBKpm6sF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx9DeRcqUWLuH0dYJR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzvbpkVuvhybFwv6W14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgztGp6E7FEMmR0KZIh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugwl8tB54vBUAuDJu6x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzceN63p7Wjwiatmyl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzA57hfAzVHCz2j3RB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxqphEyrIMxHIdGY1V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
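The raw response above is a JSON array of per-comment codings, so looking up a comment by ID reduces to parsing the array and matching on the `id` field. A minimal sketch (the `lookup_coding` helper is hypothetical, not part of the tool; the excerpt reuses two codings from the response above):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of codings.
raw_response = '''[
  {"id":"ytc_Ugy4KxxlERLxFGZMUWh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgztGp6E7FEMmR0KZIh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]'''

def lookup_coding(response_text: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if absent."""
    for coding in json.loads(response_text):
        if coding["id"] == comment_id:
            return coding
    return None

coding = lookup_coding(raw_response, "ytc_UgztGp6E7FEMmR0KZIh4AaABAg")
print(coding["policy"], coding["emotion"])  # industry_self resignation
```

In practice the array would be indexed into a dict keyed by ID once per batch rather than scanned per lookup, but the linear scan keeps the sketch short.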