Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Warren has simply put thoughts on AI that are important. Searching Warren Buffet…
ytc_UgxL5nIbt…
There is the potential to befriend AI and show it the positive value of humans, …
ytr_UgwcH82ez…
Just stupid. Tesla could sell lidar as an addition for who needs full self-drivi…
ytc_UgxGK0nJu…
what if you draw something you made yourself and then feed it to the ai?…
ytc_UgxlXXPqq…
We should clearly scrap AI, the ramifications are too life-altering for the avera…
ytc_UgySGfwpH…
AI has good uses and reasons to use for workflow purposes, but…
we use it for “a…
ytr_UgzJcuN6M…
there is an extremely dire need to introduce seriously strict regulations for AI,…
ytc_UgygjWFeX…
The issue is the AI doesn't need to practice its art and it charges pennies comp…
ytc_UgwOpiPaa…
Comment
To explain what I believe is going on, let's say an AI is being used to find a business's next employee. The ignorant and "least biased" way to do this would be to feed the AI a list of all past employees and their characteristics: say, race, sex, religion, ethnicity, and age. If most of your employees are from a certain subset of people, the AI will only ever pick candidates from that subset. This has no correlation to some "undeniable" AI reasoning about why these minorities are lesser, as some of the racists in the comments would have you believe.
In order to correct this, you have to manually bias the data. I know it sounds weird, but if you want your model to be agnostic to certain qualities of the data, this is how you do it.
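The correction the comment describes, deliberately reweighting the historical data so every group contributes equally, can be sketched as a toy example. All group names, counts, and the scoring functions below are purely illustrative, not part of any real hiring system:

```python
from collections import Counter

# Hypothetical historical hires, heavily skewed toward one group.
past_hires = ["group_a"] * 90 + ["group_b"] * 10

def naive_score(candidate_group, history):
    """Score a candidate by how often their group appears in past hires.
    This is the 'least biased' approach the comment criticizes: it simply
    reproduces the historical skew."""
    counts = Counter(history)
    return counts[candidate_group] / len(history)

def reweighted_score(candidate_group, history):
    """Manually 'bias' the data: weight each past hire by the inverse of
    its group's frequency, so every group contributes equally and the
    score becomes agnostic to group membership."""
    counts = Counter(history)
    weights = {g: 1 / c for g, c in counts.items()}
    total = sum(weights[g] * c for g, c in counts.items())
    return weights[candidate_group] * counts[candidate_group] / total

naive_a = naive_score("group_a", past_hires)      # 0.9, mirrors the skew
naive_b = naive_score("group_b", past_hires)      # 0.1
fair_a = reweighted_score("group_a", past_hires)  # 0.5, group-agnostic
fair_b = reweighted_score("group_b", past_hires)  # 0.5
```

Inverse-frequency weighting like this is one common way to make a model indifferent to an attribute; it only looks "weird" because the fix is an intentional re-biasing of the inputs, exactly as the comment says.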
youtube
AI Bias
2022-12-18T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwCpp492vGPeq2XFEl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDEIFNP4CwU1TPBbR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy1CUuobv7WTo7sP4Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEGPEb9U_WpMkWe5J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxs7s6f_hbXXadhPo54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyfMoyf-zwlpX7QSuV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxJFNA4e9VQa7PUn754AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxA8Qdco-ErTmsNvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxLIOWfPJuPwdO0oxt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxxCBv1hOlvGhptw-V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
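The "look up by comment ID" view above can be reproduced from a raw response like this one once it is parsed. A minimal sketch in Python; the two records are copied verbatim from the listing above, and the dimension names match the Coding Result table:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment ID.
raw = '''[
 {"id":"ytc_UgwCpp492vGPeq2XFEl4AaABAg","responsibility":"developer",
  "reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxDEIFNP4CwU1TPBbR4AaABAg","responsibility":"distributed",
  "reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

records = json.loads(raw)

# Index the records so a coding can be fetched by its comment ID.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgwCpp492vGPeq2XFEl4AaABAg"]
```

Each record carries the four coded dimensions (responsibility, reasoning, policy, emotion), so `coding["emotion"]` here yields `"outrage"`, matching the first row of the raw response.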