Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Yeah I'm sure the developers of these AI never heard of Asimov's laws ;) This is…
ytr_Ugzrlpz7h…
you can't also say Mechahitler was one of the most dangerous things to come out …
ytc_Ugw9iuJxf…
Are you sad because it uses existing copyright data or is it something else? If …
ytc_Ugz7rOQDv…
This guy is old and he has kids that don’t have kids at his age either he had ki…
ytc_Ugz4MIEYg…
Ai will not even replace custome care people.. Because like in India common peop…
ytc_UgwQvMToE…
there probably won't be an argument against democracy that hits harder than "AI …
ytc_Ugz1rnUaE…
Advocate here. I will guarantee that no Ai can understand and turn a situation/i…
ytc_UgzJynz5X…
it is also important to teach ai that there is a ton of metaphor in the human ex…
ytc_UgwKprWb8…
Comment
The reality is that a real self-driving car's algorithm is not going to have a specific condition for being boxed in on all sides by specific permutations of vehicles. It probably won't even figure out what kinds of vehicles are out there and it certainly won't try to figure out which bikers have helmets. There will be a series of top level, general case decision trees from which specific responses are emergent. The code will basically just tell your car to take the path which takes you away from all known obstacles as fast as possible. In a thought experiment like this, it's going to end up being something like your car will swerve in whatever direction there is slightly more room, while braking to try to dodge the falling boxes.
Ethicists seem to like to think about technology in an abstract, perfect sense. They're trying to figure out how to program an omniscient car AI to respond to contrived scenarios while actual accidents are going to overwhelmingly result from software bugs and hardware failures. If a car AI is good enough to quickly and reliably figure out the complete casualty result of every possible action it can take, then it is definitely good enough to just avoid trailing a giant ass cargo truck at less than stopping distance.
youtube
AI Harm Incident
2015-12-14T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgjItq0wivzFzHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi3UjQWwYBga3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugil7mqZ96nRsXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgizhDQN0tfbqXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjdML6iup9kxHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiDITa8mouAQXgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Uggk2g1O4hSYuXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjdS9_U-Ytg-3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgjIfcNAortGP3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghasmfeHrS-OHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
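A raw batch response like the one above can be parsed into per-comment coding rows of the kind shown in the table. The sketch below is a minimal example, assuming the four dimensions and the value sets visible in this sample (the full codebook may define additional categories); `parse_batch` and `ALLOWED` are illustrative names, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred only from the sample response above;
# the project's real codebook may include more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}


def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding_row},
    dropping rows with malformed IDs or out-of-scheme values."""
    rows = {}
    for item in json.loads(raw):
        cid = item.get("id", "")
        # Comment IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
        if not cid.startswith(("ytc_", "ytr_")):
            continue
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items()):
            rows[cid] = {dim: item[dim] for dim in ALLOWED}
    return rows


raw = (
    '[{"id":"ytc_UgjdML6iup9kxHgCoAEC","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
print(parse_batch(raw)["ytc_UgjdML6iup9kxHgCoAEC"]["responsibility"])  # developer
```

Validating against a fixed value set before accepting a row is a cheap guard: LLM batch output occasionally drifts from the requested labels, and silently storing an unknown category would corrupt downstream counts.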