Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @rongooden6545 Yes but you need to remember, this is the worst this technology w… (`ytr_UgyBJgqTr…`)
- Years later and AI probably gonna be sentient enough to get arrested by police b… (`ytc_UgxEXuhtx…`)
- This is Roko's basilisk. Not what the programmers think, but the harm those pro… (`ytc_UgzSdZ0dy…`)
- I completely agree; AI is very dangerous for the mind, language, and brain healt… (`ytc_UgyRGvD3W…`)
- This is only CGI for graphics expert's boredom. Don't fooled with that. Unless, … (`ytc_Ugy5su-_D…`)
- People always need to be threatened from something very big......this is BullShi… (`ytc_Ugz_48s9M…`)
- Self aware AI shouldn't be developed to begin with in my opinion, since the poin… (`ytc_UgiC1pPPo…`)
- Humans have the knowledge but we also have empathy, emotions, feelings, spirit a… (`ytc_UgzLR49US…`)
Comment
Bias in the Machine: The Inheritance of Inequality
At first glance, AI systems may appear neutral, even objective. After all, they rely on data and logic—surely a computer can’t be racist, sexist, or discriminatory. But in reality, AI systems often reflect the biases of their human creators and the data they’re trained on. The myth of AI impartiality is one of the most dangerous misconceptions of the digital age.
AI systems learn from data—massive datasets gathered from the real world. But the real world is messy and unjust. Historical data often includes the imprints of social inequity: discriminatory hiring practices, policing patterns influenced by racial profiling, gender disparities in income and healthcare. When AI learns from this data, it doesn’t just learn facts—it learns patterns, and those patterns can encode systemic bias.
youtube · AI Governance · 2025-10-03T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyov9ToiRlge25Zd7N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwU8sFWJQe3FuRsADF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyTRIgEFbBckisPcxx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwjvlaqHqjBhj470pJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzQ8TiI6_2BNii7tBJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyqQr9BKByFFrhGzI54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzXZhLI0v_1pg3NG754AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwIjUqtuIlebIlM8GN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzqWn1mGZVjrG6lYi54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxjn6_n_AWffOR8Tq14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
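A raw response like the one above has to be parsed and checked before it can populate the coding table. The sketch below shows one way to do that in Python; the allowed value sets are only inferred from the values visible in this response (the project's actual codebooks may define more categories), and the function name and strict-failure behavior are assumptions, not the tool's actual implementation.

```python
import json

# Dimension values observed in the response above; the real codebooks
# may allow additional categories (assumption).
ALLOWED = {
    "responsibility": {"government", "developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError on a missing ID or an out-of-codebook value, so a
    malformed model response fails loudly instead of entering the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One row from the response above, as a usage example.
raw = ('[{"id":"ytc_Ugyov9ToiRlge25Zd7N4AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(coded["ytc_Ugyov9ToiRlge25Zd7N4AaABAg"]["policy"])  # regulate
```

Failing fast on unexpected values is a deliberate choice here: silently keeping an unrecognized label would corrupt any downstream tallies across the coded dimensions.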