Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect)
- "Ai hallucinates all the time though. Its a great tool, but will screw you over i…" (ytc_UgzfIXQJR…)
- "Other countries arent going to stop AI development on this same data. It will ha…" (ytc_UgzZWrh8-…)
- "I think AI still has the limitation because it is not "real" as in not directly …" (ytc_UgxyeCPmd…)
- "When you get into photography its amazing how much personal style matters. You c…" (ytr_Ugzy0k5YF…)
- "Someone is going to let it happen, its important to learn about what AI can do r…" (rdc_jcc3331)
- "I don’t think AI taking jobs is the worst thing for a society. I think it’s the …" (ytc_Ugw1Kiymk…)
- "What do you expect from A.I. when it is designed by sociopaths and megalomaniacs…" (ytc_UgxQzHqNI…)
- "Can't wait for real AI doll, with selection of programmes, no more wife or narci…" (ytc_UgxgKWQ-7…)
Comment
I don't think racism is a particularly big threat in American society, but this is different. When a person is racist, it's pretty obvious and we can devalue their opinion accordingly. But if an AI happens to be racist, we might not even realize it. People tend to trust AI and rely on its conclusions because we ask it questions we're familiar with and it gives increasingly good answers. But this causes us to have massive blind spots to questions and situations we wouldn't even think of. The need to pay attention and stay vigilant and continually question the reasons for AI's decisions is incredibly important. The ancient Egyptians conceptualized attention as a higher-value trait than intelligence, and there is a case to be made for the continued relevance of this idea even now.
Source: youtube · AI Bias · 2022-12-18T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyY02vN9bm36roekix4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw7GnKbXbloOuTBO2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUqJEiHJtrGCjai0t4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugynh6LH0k94tB4ZltV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeXPAq9ezentfQ_KJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2yEq91yie1t61suF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQIbAgrbltlzLoyxt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzVeKnn7l_a9lsef0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgypWydNKlWLek50Sh54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzw92vMBPvTdp1uomp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
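Before trusting a raw response like the one above, it is worth checking that every record is well-formed JSON and that each dimension takes one of the expected codebook values. The sketch below is a minimal illustration, not part of the original tool; the `ALLOWED` sets are assumed from the values visible in this page and the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, as inferred from the examples above.
# Assumption: the actual codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "government", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"fear", "indifference", "outrage", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# Example: a single well-formed record passes validation.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(len(validate_coding(raw)))  # → 1
```

A check like this catches the common failure modes of LLM coding runs (truncated JSON, invented category labels, dropped IDs) before bad records reach the results table.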