Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "You mean legal work where a lawyer cited cases completely made up by AI? We're n…" (ytc_Ugy9BFVqj…)
- "Agreed, AI makes a LOT of mistakes. Also, when Jensen Huang (CEO of Nvidia) say…" (ytc_UgzlP3Xm1…)
- "This is why people should read about Jesus. He offers a world much better to liv…" (ytc_UgwrEi4Oj…)
- "I tell u my story as a software developer i use AI to write code but AI will not…" (ytc_UgySMLwDc…)
- "man i do agree with you but the people who use ai are often disgusting slobs (im…" (ytr_UgxjgNKHx…)
- "So many issues with gun control as is. We're not even remotely close to regulati…" (ytc_UgznpNOdl…)
- "Our reality operates based on probability and observation. This neural network s…" (ytc_Ugy4UFBfo…)
- "A person watching a video of people watching a A.i realising that she is a watch…" (ytc_Ugxc0xFZa…)
Comment
I think the central flaw of thought experiments like this is we assume that a self driving car can only drive and react like a person can. Our current traffic is made up of millions of individuals that usually drive faster than they can realistically keep track of their surroundings. And they have almost no communication with each other. If we connect our self driving machines as part of a larger whole and equip them with the sensors and processing power to react properly then most of the issues we currently have would disappear.
Instead of thinking about how a self driving car decides who dies, we need to look at ways to make them able to avoid having to make these decisions at all.
For example, program the car in our experiment to keep a proper stopping distance behind heavy vehicles with open loads. The accident in question could be avoided by simply applying the brakes on the cars immediately behind, and merging cars further back into other lanes. Have the central traffic system flag the accident site once the boxes stop moving and send clean up vehicles.
youtube · AI Harm Incident · 2021-07-31T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzslwt9QiheJpWRbfp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugxcu2cWNWH8_geonwJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwXxgUBr-1VnhEAZzd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwugIWuRy3J137G9j14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugx2e0zPvlz0YqnvIjZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvMPLnc85qWm1CCV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxcvVkDSDnxfMmBnS94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwSIBvE8zK_FrylW6d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTbdc_xoYz5h6zmE54AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwIxDwsTLHdFBsCgnx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
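The "look up by comment ID" step can be sketched in Python: parse the raw LLM response as JSON, then index the coded rows by their `id` field. This is a minimal illustration, not the tool's actual implementation; the `raw_response` literal below reuses two rows from the response shown above, and the variable names are hypothetical.

```python
import json

# Hypothetical variable holding the raw LLM response text; two rows are
# copied verbatim from the response above for illustration.
raw_response = '''
[
  {"id": "ytc_Ugzslwt9QiheJpWRbfp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugxcu2cWNWH8_geonwJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
'''

# Index rows by comment ID so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw_response)}

row = codes["ytc_Ugxcu2cWNWH8_geonwJ4AaABAg"]
print(row["responsibility"], row["policy"], row["emotion"])
# → none industry_self indifference
```

The dict-by-ID index matches the coding-result table above: the second row's values (`none`, `industry_self`, `indifference`) are exactly the dimensions the tool displays for the selected comment.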