Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "Enough of this BBC nonsense. As an Indian deeply rooted in tech and farming, I u…" — ytc_Ugy-btnuk…
- "11:03 Exactly—These AI bros are removing the purpose to enjoy life. Why they cho…" — ytc_UgyB41Nv9…
- "AI replacing human workers will just put to bankruptcy all business firms and go…" — ytc_UgxPWg9YT…
- "@topin8997 Well fair point, atleast the video IS targetting AI that is trained …" — ytr_UgyWpbDm_…
- "AI is great at noise though lol. This ain't my main job, I'm cmo for a Saas / ed…" — ytr_UgxsY8Zvt…
- "heres what should happen, these CEO's want a world with out us, and we want a wo…" — ytc_UgySkfYWH…
- "I have a feeling that AI will be used to commit horrific atrocities against huma…" — ytc_Ugxyrhxh7…
- "AGI is 50 years away. LLMs will never lead to AGI; they can't reason. Think of L…" — ytc_Ugy_UUxpO…
Comment
What I got out of this: This talk fails to acknowledge that A.I. is already being coded to reflect the political and social engineering biases of the humans that create it (e.g. the bots that Facebook routinely deploys to look for ToS violations), that those biases are more Left-leaning than Right-leaning, more authoritarian than libertarian, and reflect its own sorts of prejudice. While she worries about results being "too white", many of us are left rolling our eyes when we get a social media warn over something that was clearly a joke that the enforcement bots didn't understand.
Source: youtube · Video: AI Responsibility · Posted: 2023-12-27T23:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx0qKsFULOndt_U5wF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwE1scVjZgJ5TXDR154AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwU862zyjIm5GGYxn54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyDamW5u2ScgC8juh94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzSwdbGP3DP3afLTmF4AaABAg","responsibility":"society","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzIB-ShR0rb8FCAqtN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxg-RvI2-dkZXb4Br54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwNF_CgfRUTSZBoGZ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_b3LYcP2Xre4bZeh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwpe4ZnICoqyIwZJZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
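The lookup-by-ID step above can be sketched in a few lines: the raw LLM response is a JSON array of per-comment codes, so indexing it by the `id` field gives constant-time lookup of any coded comment. This is a minimal illustration, not the tool's actual implementation; the two rows embedded here are copied from the response above, and the function name `index_by_comment_id` is a hypothetical helper.

```python
import json

# Two rows copied from the raw LLM response shown above (illustrative subset).
raw_response = """
[
  {"id": "ytc_Ugxg-RvI2-dkZXb4Br54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwNF_CgfRUTSZBoGZ54AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
"""

def index_by_comment_id(response_text):
    """Parse a raw LLM response and index its coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)

# Look up the coding result for one comment by its ID.
result = codes["ytc_Ugxg-RvI2-dkZXb4Br54AaABAg"]
print(result["responsibility"], result["policy"])  # developer regulate
```

A dict keyed on `id` mirrors what the "Look up by comment ID" box does: each comment appears at most once per response, so the ID is a natural primary key.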