## Raw LLM Responses

Inspect the exact model output for any coded comment. Enter a comment ID to look one up directly, or inspect one of the random samples below.

### Random samples
- "Start calling them “ai *users*” instead of “ai artists” They don’t deserve even…" (`ytc_UgztRy_GW…`)
- "I believe you could arrange to setup an interview with ChatGPT. It would be a f…" (`ytc_UgwE6iiJH…`)
- "Self-improvement is kinda the definition of AI. All the real experts were asking…" (`ytc_UgxTEQFYP…`)
- "I mostly see issue with people selling or posting ai pics. I'd love to get rid o…" (`ytc_UgwCoQyJy…`)
- "well I called it. when this AI thing first started I said that it would not be l…" (`ytc_Ugwjiy5BI…`)
- "Great video. I'm here because you appeared on the Daily Wire (https://www.yout…" (`ytc_UgyG5SSWE…`)
- "If the technology exists to make a toaster than very soon someone will invent t…" (`ytr_Ugy8ADoU7…`)
- "That pro ai artist has to pick up a pencil and practice for as long as I have. N…" (`ytc_Ugy8WMB4q…`)
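For programmatic spot-checks, the same lookups can be scripted. A minimal sketch, assuming the coded comments live in a local SQLite table `coded_comments(comment_id, raw_response)` — a hypothetical schema, not the tool's actual storage:

```python
import json
import sqlite3

# Hypothetical schema (an assumption, not the viewer's real backend):
#   coded_comments(comment_id TEXT PRIMARY KEY, raw_response TEXT)

def get_raw_response(db_path: str, comment_id: str) -> dict | None:
    """Return the stored raw LLM output for one comment ID, if present."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT raw_response FROM coded_comments WHERE comment_id = ?",
            (comment_id,),
        ).fetchone()
    return json.loads(row[0]) if row else None

def random_samples(db_path: str, n: int = 8) -> list[str]:
    """Draw n random comment IDs for manual spot-checking."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT comment_id FROM coded_comments ORDER BY RANDOM() LIMIT ?",
            (n,),
        ).fetchall()
    return [r[0] for r in rows]

# Usage (hypothetical path; the ID is taken from the sample below):
# print(get_raw_response("codes.db", "ytc_Ugi_4HU5JSF7SngCoAEC"))
```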
### Comment

> Thanks for the video. I have heard for a long time already about "what self driving cars should do if..." dilema, but never out under the ethical questioning. The part that was never mentioned to me is that humans are absolved of responsibility for any reaction, but algorithms are predetermined.
>
> Aside from everyone claiming that the thought experiment would not happen in real life (and I get that), I still appreciate the questioning.
>
> But come to think of it: the problem applies to any automated task with moral implications, right? Human reaction vs. algorithmic determination is not new. How are we currently dealing ethically with, let's say... automated battle drones? Operating nanobots? Damn, one could even stretch this out to discuss about electric fences.
youtube · AI Harm Incident · 2016-12-18T16:1…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
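Each record is coded on four fixed dimensions. A small validator sketch follows; the allowed value sets are inferred from the labels visible in this sample and in the raw response below, so they are almost certainly a subset of the real codebook:

```python
# Sketch of a record validator for the four coded dimensions.
# Allowed values are inferred from this page's sample output only;
# the actual codebook may define more labels.
ALLOWED = {
    "responsibility": {"company", "none", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# Checked against the record shown in the table above:
print(validate({"responsibility": "distributed", "reasoning": "mixed",
                "policy": "liability", "emotion": "mixed"}))  # -> []
```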
### Raw LLM Response
```json
[
  {"id":"ytc_UgiDRHNP6Ll3F3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgiIYchWvUGckHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjiR5ifVgu5L3gCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgiJaxBMly9MvXgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UghSuhCsL9iAHXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghAA7dcebmab3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghAsDUNhcPf4XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugi_4HU5JSF7SngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugiufh1PTT6cmXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UghXJhtXibHvFXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
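The model returns one JSON array per batch, so recovering per-comment codes is a parse-and-index step. A sketch under that assumption (the `index_batch` helper and its error handling are illustrative, not the pipeline's actual code):

```python
import json

def index_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded records)
    and index the records by comment ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"model output is not valid JSON: {err}") from err
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    return {rec["id"]: rec for rec in records}

# Example using one record from the response shown above:
raw = '[{"id":"ytc_Ugi_4HU5JSF7SngCoAEC","responsibility":"distributed",' \
      '"reasoning":"mixed","policy":"liability","emotion":"mixed"}]'
codes = index_batch(raw)
print(codes["ytc_Ugi_4HU5JSF7SngCoAEC"]["policy"])  # liability
```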