Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of these random samples:
- "FAKE AF The poor makeup does not hide her being a real human. There is NO AI on …" (ytc_UgyqfcI6H…)
- "He makes AI ? Smh. Dude is intelligent but come on now, more like a team of engi…" (ytr_Ugx1RIXXY…)
- "Hi Shaun, we are sorry to say that you got the wrong answer but in any case, the…" (ytr_UgzwZnlYW…)
- "AI "art" is just fake, soulless, and it lacks the inherent meritocracy and human…" (ytc_Ugx8KKymF…)
- "Exactly, what would do when his Ai will find some truth he does not like, like t…" (rdc_jhd64xu)
- "Creative work cannot exist without an actual creative person behind it as far as…" (ytc_UgzCaFI57…)
- "Regarding white-collar- work: There is no proof that you won't need humans to ru…" (ytc_Ugwtidvgw…)
- "It was too late long ago. Well by that I mean surveillance like this and facial …" (rdc_eu6g7dl)
Comment
> This is radically over-simplified; understandably so, since its a short video. But honestly, I feel this video is doing more harm than good by fear-mongering. Maybe thats not the intention, but thats the inference I made.
>
> The problem is that the programmers are not hard-coding in "ok, take out the dude without the helmet because its safer." Thats not something youll find in the code...on any level whatsoever. If such an outcome occurred (which it most likely wouldnt if both the car and motorcycle were self-driving), it would be done on the conditions that the car was attempting to avoid the accident all together (ie it swerved toward the motorcyclist because its smaller and more likely to be missed). Youre not going to find moral decisions in self-driving cars, only code whose singular purpose of existence is to avoid the accident, period; regardless if the accident could or couldnt be avoided. Thats no different than a human being with perfect (or close to perfect) reaction time.
youtube · AI Harm Incident · 2015-12-09T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UggTAra7ykO18HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiiIRzPV-PDJngCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UghNHFfbScHAI3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Uggs6xSxQV1idHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh555atHjwB23gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjGZiL-RQWZh3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugj_T2kb-3J5iHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgglL4SDgYq70ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UghQrXYx4XEWV3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Uggozw99vhiuyngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
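A batch response like the one above is only useful if every row parses and every label falls inside the codebook. A minimal validation sketch in Python, assuming the dimension vocabularies are limited to the values visible on this page (the real codebook may define more categories; `validate_batch` and `SCHEMA` are hypothetical names, not part of the pipeline):

```python
import json

# Allowed values per coding dimension, assumed from the labels seen on
# this page; the actual codebook may be larger.
SCHEMA = {
    "responsibility": {"none", "user", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"approval", "fear", "mixed", "indifference", "outrage"},
}

def validate_batch(raw):
    """Parse a raw LLM batch response, keeping only well-formed rows."""
    rows = json.loads(raw)  # raises ValueError if the model broke the JSON
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # a row without a comment ID cannot be joined back
        # Every dimension must be present and drawn from its vocabulary.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # → 1
```

Rows with out-of-vocabulary labels are dropped rather than coerced, so a re-prompt can target only the comment IDs that failed validation.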