Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgzI9_sfk…`: Geoffrey Hinton is exactly correct about instilling maternal instincts in AI. …
- `ytr_Ugw1gDrVK…`: Who says AI can’t pay electricians an absurd amount of money online through digi…
- `ytc_UgyBCWahZ…`: Did he ask himself, “I can BUT should I?” At 77 going out, all he has to express…
- `ytc_UgxB75Rso…`: I just realized that i think we are starting to use ai a bit too early before hu…
- `rdc_g145xcf`: On a funny side note, I know a guy who was pissed off about facial recognition c…
- `ytc_UgxL-pAQw…`: @LouisRossmann @4:09 "If it works, when there's no license plate installed in th…
- `ytc_UgyBcpFVl…`: Remember that AI knows everything you do online. It KNOWS what you think and you…
- `ytc_UgzLYEzJs…`: A Review By AJ: I’ve just finished watching “Lost in the Hype: AI Will Never Bec…
Comment
Excellent thought experiment, but the video ignores some of the possible outcomes. It is quite likely that most of the other cars on the road will also be SDCs (self-driving cars), which means that they may be able to react in time to avoid a collision all together. Consider: an SDC could actively keep track of what other cars on the road are automated vs. driven by humans at all times. In the event of a potential accident (such as the falling boxes) the SDC could put out a "distress signal" to the other SDCs on the road, and then swerve towards them. The other SDCs could then accommodate the swerving car by moving to the side, accelerating, or hitting the brakes, and they could do this with the knowledge of what other cars around them would and would not (i.e. human-driven cars) be able to react in time. Thus in many situations an accident could be avoided all together.
That said, there is still a chance that all the cars around you are human driven, so the video's thought experiment is still relevant.
youtube · AI Harm Incident · 2017-06-25T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
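The dimension values above are categorical labels. As a minimal sketch of checking one coded record, the label sets below are inferred from the codings visible on this page (they are an assumption, not a confirmed schema), and `validate` is a hypothetical helper:

```python
# Label sets observed in the codings on this page (assumed, not a confirmed schema).
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passed."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The coding shown in the table above passes:
coded = {"responsibility": "none", "reasoning": "consequentialist",
         "policy": "none", "emotion": "indifference"}
assert validate(coded) == []
```

A check like this catches the common failure mode of batch-coding prompts: the model inventing a label outside the codebook.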
Raw LLM Response
```json
[
  {"id":"ytc_UggQprZepafZ1HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UghRftAajpYgC3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgiVNu11IS5PiHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugis3gL-vgXrpHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugj3h2tFVAqPSngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugj5A0pJm2zcoXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggIGhHRenxDK3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjS60trIUKAvHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggCEcSJA552hHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgitMxhB_OZFhXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
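The look-up-by-comment-ID view above amounts to indexing this batch response by `id`. A minimal sketch, assuming the raw model output parses as a JSON array like the one shown (`index_codings` is a hypothetical helper; the two sample records are copied verbatim from the response above):

```python
import json

# Two records copied from the raw LLM response on this page.
RAW_RESPONSE = """
[
  {"id":"ytc_UggQprZepafZ1HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UghRftAajpYgC3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Map comment ID -> its coded dimensions (with the ID stripped out)."""
    by_id = {}
    for record in json.loads(raw):
        record = dict(record)          # copy, so the parsed list is not mutated
        comment_id = record.pop("id")
        by_id[comment_id] = record
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UggQprZepafZ1HgCoAEC"]["emotion"])  # -> outrage
```

In practice a model can also return malformed JSON or a truncated array, so wrapping `json.loads` in a try/except and logging the offending raw text is a sensible extra step.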