Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect
- @bibblethealmighty oh ok. Also, there's the program to poison my artwork. I want… (ytr_Ugy-978dO…)
- Wtf why is facial recognition technology more likely to "misidentify minorities… (ytc_Ugxo8LrHB…)
- LOL Yeah, let's see how fast a robot can pick something out of an overfilled bin… (ytr_UgyPfb4Zj…)
- Another question would be "If AI takes all of our jobs...how will we afford livi… (ytc_UgyN2CbiZ…)
- The punishment Daan saw what humans saw and how they would react not solely on t… (ytc_UgzVOncLE…)
- If anyone seen all of my characters on character AI no living species will be on… (ytc_UgyWdJqeZ…)
- You got that from the AI generated synopsis, didn't you Dan? I'm sorry Dan, you … (ytr_Ugx0HmtYf…)
- I haven't drawn in a long time, but I'm going to learn how to draw better just t… (ytr_UgzaYQ9fM…)
Comment
"Instead of getting distracted by future risks?" Are you kidding? You are trying to distract from the existential risks so you can get airtime.
Of course current AI has problems. No one who is trying to raise awareness about the non-intuitive dangers of AI would disagree. But you are pretending that AI is the same kind of tool we have always made, when it is definitely not. In under a decade we went from 'it will be decades or centuries before machines will out think us' to 'in a few years machines are likely to out think us'. They are not slowing down, in fact, they are already using their superior intellect to design improvements in themselves we couldn't think of.
Platform: youtube · Topic: AI Responsibility · Posted: 2023-11-06T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxAL0ukzbZEdTZPQtx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzfXn21BiurL1tmsaN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwqts5VyG_N63DRGOp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxATXJqvYGw0mllYz94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzj1-gEV1upbIGfUiJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9_f0tMSf0H1-iO9t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGZ-KfndIdSmbe9HR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxutE1MxXKGjVrKaaZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwwIuvNe3ezMXQ8-jV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz0Bsx5UBPoY5n2SRV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
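The lookup described above can be sketched programmatically: each raw LLM response is a JSON array of coded records, each carrying a comment ID, so indexing the array into a dict gives constant-time lookup by ID. The two records below are copied from the response above; the helper name `index_by_comment_id` is illustrative, not part of the actual pipeline.

```python
import json

# A raw model response in the shape shown above: a JSON array of coded
# comments, one object per comment. Two records copied from the response.
raw_response = """
[
  {"id": "ytc_UgxAL0ukzbZEdTZPQtx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzfXn21BiurL1tmsaN4AaABAg", "responsibility": "company",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse one raw LLM response and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgxAL0ukzbZEdTZPQtx4AaABAg"]["emotion"])  # outrage
```

In practice the same index would be built once over all stored responses, so that pasting an ID into the lookup field above resolves straight to its coding record.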