Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The english speaking world should accept Ai as the abbreviation for artificial i…" (ytc_UgwNo-P3L…)
- "> people who enter their private property only / Next on "Walmart Watch," targ…" (rdc_fvz0juc)
- "He is not getting destroyed, people who copied his work are the proof that ai ar…" (ytc_Ugx4l97-G…)
- "Who is here now in the future That it was reported that AI now knows how to code…" (ytc_UgyJaqkn8…)
- "You are still drawing it yourself. Digital art is and has always been real art. …" (ytr_UgwHsQeDT…)
- "C'mon, he aint 'Godfather of AI'. He's just highly exaggerated and overrated by …" (ytc_Ugx70EJUl…)
- "I like the part when people clapped when AI took advantage of art work and huma…" (ytc_Ugxrijh-B…)
- "If AI superintelligence is really smart it will first of all kill those that are…" (ytc_UgzITUVeO…)
Comment
AI can't do emotions and humans are very emotional. If you had AI use ethics to determine who gets to go to college it would pick the riches healthiest kids because they have the highest rate of success. They are also pro-eugenics. Favor animals over humans and a bunch of other horrible stuff because they are looking for the most efficient solution. They can't calculate emotion or comprehend free will, they just see numbers and statistics. It's why Ethical robots were determined to not be ethical years ago. Why we still doing this?
youtube · AI Bias · 2022-12-29T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZJDWPE71x3hNbDzJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxAR1cTpsVQsNC9oEF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwt_Yfedb_DR3gNE3p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxMRBgUCmoeS7gaefJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZtOdRBkvmXRygIzd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIsx6R2fnWq5SbP4d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyIEigtpV_FIQZZaOd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgylAdK7NHbhx9hwEhV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyQEmCHOjUGy24EfJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzER5Ml5g8W2zBeZLB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
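A raw batch like the one above can be turned back into per-comment coding records with a small parser. The sketch below is an assumption about how such a response might be consumed, not the tool's actual code: the allowed dimension values are only those observed in this sample (the full coding scheme may define more), and `parse_batch` is a hypothetical helper name.

```python
import json

# Dimension values observed in this sample batch; the real coding
# scheme may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"government", "ai_itself", "developer", "user"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, skipping
    rows with a missing id or an out-of-vocabulary dimension value."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        dims = {k: row.get(k) for k in ALLOWED}
        if cid and all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[cid] = dims
    return coded

raw = (
    '[{"id":"ytc_UgzyQEmCHOjUGy24EfJ4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)
print(parse_batch(raw))
```

Validating against a closed vocabulary matters here because an LLM can drift from the requested labels; dropping (or flagging) malformed rows keeps the "look up by comment ID" view consistent with the coding table shown above.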