Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I never knew Clay the carpenter Guyida THE UFC fighter who fought Diego Sanchez …" (ytc_Ugzx5_ovW…)
- "they praise AI, but in an insulting way and then says that we attack them first…" (ytc_UgwmrkusS…)
- "Next these robot is going to be holding a tool dark times is heading out way…" (ytc_Ugyw0A0ZS…)
- "So Ai is the real life skynet that can one day decide to terminate humanity if i…" (ytc_UgwM4QRjQ…)
- "@miguelplays2921 My replies keep getting deleted, so this is very brief. Use you…" (ytr_Ugxw8UX8r…)
- "this is inaccurate. AI can't iterate. that's what makes employees valuable (bett…" (ytc_UgyEz-NB5…)
- "I bet AI can do that by now. The problem is it learns very, very, quickly.…" (ytc_UgyfY36aL…)
- "Although this video is six years old, I can tell you for a FACT that the technol…" (ytc_UgwOk1zwv…)
Comment
The problem with this concept is that you do not consider that the self-driving car was not keeping a safe distance from the truck in front of it. Even as a human driver, you are expected to keep a safe distance, and extra distance should be added because the truck is carrying an unsecured load. At least visibly, you can see it is a high-risk load, since it consists of stacked items.
Also, the example clearly shows unsafe driving by all participants: you are not supposed to pass on the right side, and you should stay with the flow of traffic in one lane. So the human factor of bad driving is huge here.
You also forget that over time more cars will be autonomous, so more and more cars will keep a safe driving position; eventually this problem becomes irrelevant. You could also argue that the decision made by the "programmer" for the car is no different than the random choice you make in the moment.
Then, when an autonomous car makes decisions, it would also consider new situations within fractions of a second. If it swerves and then detects another collision, it would be able to react to that quickly. In theory, the car might swerve so quickly and accurately that it would not crash or hit anything.
I believe this thought experiment blatantly ignores obvious factors and how differently humans and machines react, especially when it comes to reaction time. That alone makes the concept pointless. In the end, you still have the human error: the fault lies with the person who did not secure the load.
youtube · AI Harm Incident · 2017-02-14T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
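The table above pairs each coding dimension with its assigned value. A minimal sketch of how one such coded record could be represented in Python (the `CodedComment` class and its field names are illustrative assumptions mirroring the dimensions shown, not part of the actual pipeline; the example values are taken from the table and the raw response):

```python
from dataclasses import dataclass

# Hypothetical container for one coded comment; field names mirror the
# dimensions in the coding-result table and the keys in the raw LLM JSON.
@dataclass
class CodedComment:
    id: str
    responsibility: str  # e.g. "ai_itself", "developer", "distributed"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "regulate", "liability", "unclear", "none"
    emotion: str         # e.g. "fear", "indifference", "resignation"

# One record, with values copied from the coding result shown above.
record = CodedComment(
    id="ytc_UgjSelYS_yNxMXgCoAEC",
    responsibility="ai_itself",
    reasoning="mixed",
    policy="unclear",
    emotion="indifference",
)
print(record.responsibility)  # ai_itself
```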
Raw LLM Response
[
{"id":"ytc_UggPlXqhTyqn-HgCoAEC","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgijP7n1AYDAFHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgjSelYS_yNxMXgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghtfnAXloUXangCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ughv4M1zM_ZhFHgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UggWc282B73l5ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugir1uoAgHGQ63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghiAb5OOQ50H3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghKAohdhKOGKHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjB5UYNyemZAngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
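The raw response is a plain JSON array with one object per comment. A minimal sketch of how such a batch might be parsed and sanity-checked before use (the `REQUIRED_KEYS` set and the tallying step are illustrative assumptions, not the tool's actual validation logic; the records are copied from the response above):

```python
import json
from collections import Counter

# Keys every coded record is expected to carry, inferred from the
# records in the raw LLM response above (an assumption, not a spec).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

raw = """[
 {"id":"ytc_UggPlXqhTyqn-HgCoAEC","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgjSelYS_yNxMXgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UghKAohdhKOGKHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

records = json.loads(raw)

# Reject any record that is missing an expected key.
for rec in records:
    missing = REQUIRED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"{rec.get('id', '?')} is missing keys: {missing}")

# Tally the responsibility attributions across the batch.
tally = Counter(rec["responsibility"] for rec in records)
print(tally)  # Counter({'user': 1, 'ai_itself': 1, 'distributed': 1})
```

Checking keys up front keeps a single malformed record from silently skewing the downstream counts.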