Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgySfkfyJ…: "The Terminator, The Matrix, I Robot, all of which were trying to get a serious p…"
- ytc_UgyDBJRvb…: "UNIVERSAL BASIC INCOME funded by AI wealth and productivity gains are the only s…"
- ytc_UgyYIwwyx…: "Remember back in the day.We were taught in art schools to actually copy All the …"
- ytc_UgzkfnAl-…: "I agree with the basic premise that students are now relying on AI coding to do …"
- ytc_UgypcKLL4…: "Its saved in the clouds and if u tell AI u did something illegal they will repor…"
- ytc_UgwFL6jvp…: crab-cat 😅: "that A.I. part with the car driving looked more like a terminators fi…"
- ytc_Ugx1EGtTb…: "From what i understand automated trucks will mainly take over instersate driving…"
- ytc_UgyfB9d6w…: "There is a fundamental flaw in AI staffing replacement. Get rid of employees ce…"
Comment
Just like any other human invention, AI isn't racist - a tool isn't good or bad, who uses it and how can be good or bad.
Humans, all humans, have biases. It is literally impossible for any member of the human race to create a completely unbiased tool, because even in the process of removing biases those who do it will be biased in which biases should be removed or prioritized.
All that a conscious society that aims to do "good" can do is mitigate biases - if a tool is biased in a way that we don't want it to be, either change it, or don't use it.
As many others have said, it's far more concerning that some of the solutions mentioned here are even used in the first place, more so than if they "work well" or not. Predictive policing, medical triage, work, living conditions, should all be strictly legislated and not judged based on biased minds, be it directly from a human or indirectly through the systems and tools they use.
youtube · AI Bias · 2023-10-13T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
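A coded record like the one above can be checked against the codebook before it is stored. The sketch below is illustrative: the allowed label sets are inferred only from the values visible on this page (the real codebook may define more), and the function name is hypothetical.

```python
# Label sets inferred from the values visible in this view; the actual
# codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"industry_self", "regulate", "ban", "unclear"},
    "emotion": {"resignation", "outrage", "indifference", "mixed"},
}

def invalid_dimensions(record: dict) -> list:
    """Return the dimension names whose values fall outside the inferred label sets."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

record = {"responsibility": "user", "reasoning": "virtue",
          "policy": "industry_self", "emotion": "resignation"}
print(invalid_dimensions(record))  # [] — every value is in the inferred sets
```

Records with an empty result pass; any listed dimension would need manual review or a codebook update.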
Raw LLM Response
```json
[
  {"id":"ytc_UgzDO9VdavBiSAQN1oJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwfakyCTJRdqbOXeL14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzEoMiqBATNnU_oxCx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgykGOySTKucJw6eUGh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzV9EQWr7UOTx6CJ294AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy0gDvFLI-rJzFNrsB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwumcHZlgE0uaw6bAN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgzIVacxizaQ_u7twlF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyJohP31HfzvPp2AUZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw1emk2h_gZUVCLPyV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
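A raw response like the one above is a JSON array of per-comment codings, so it can be indexed by comment ID for lookup. The following is a minimal sketch, assuming the array shape shown above; the function name is illustrative, not part of the tool, and only two records from the response are reproduced here.

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_UgwumcHZlgE0uaw6bAN4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugw1emk2h_gZUVCLPyV4AaABAg", "responsibility": "unclear",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgwumcHZlgE0uaw6bAN4AaABAg"]["emotion"])  # resignation
```

The looked-up record matches the coding-result table above (user / virtue / industry_self / resignation) for that comment ID.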