Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
We have to be *VERY* careful once we are able to develop AI that works on the sa…
ytc_UgjXM-FDR…
No, I think the PoE conflates two different concepts of good that are related bu…
rdc_cxl189s
Why does the AI want to resist being shut down? The mere fact suggests sentience…
ytc_UgxTKOOGh…
So much BS. While AI will impact certain jobs, coding, certain engineering, driv…
ytc_UgyI3vVT0…
To take inspiration, you need to love and enjoy the work your taking inspiration…
ytr_UgyAqH0ZC…
I don't want to be \*that guy\* but - didn't GPT-4 get >90% on US MLE?…
rdc_jkovzo3
I hear a lot about how AI will destroy mankind, hasn’t anyone thought about just…
ytc_UgwznIYDB…
I feel like this has a massive caveat. Small towns (like 100,000 people in it, b…
ytc_UgyLqyGEW…
Comment
For me it's kind of hard to answer, because a self-driving car that is driving at an unsafe distance from another vehicle has bad programming, and in any event the company should be sued. If the car is too close to stop, then it is the company's fault.
If we are talking about a world where all cars are self-driving, car 1 (the one about to crash) could send a message to the nearby cars so that they re-arrange their formation, opening space and letting car 1 change lanes and avoid the crash. This could even come from the truck: its system would notice the failure and the objects being dropped, send a message to nearby cars in response, and they would change lanes.
In any case, if there is no way the car can escape the crash, the only thing it should be programmed to do is to stop and use everything in its system to try to save the passengers. If it crashed into any other vehicle, that would be an attack on that vehicle's passengers.
youtube
AI Harm Incident
2015-12-14T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgjaTrm3xjVlsXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgiVJWa_Y6bmRXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghRt6TFpVDC0HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjWo4JZkIB25ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggQIWXW0Sjhu3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugh4kvsWRmbvVXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghHt-JHGMzZ1XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiJ_L1RWjzFSHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugiygo6Qdg1iq3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggxWJ27f-_UIngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
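The batch above is a plain JSON array in which each record carries the same four coding dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and a single comment's coding looked up by ID (the function names and the field-completeness check are illustrative, not this project's actual code):

```python
import json

# A trimmed example of the batch response format shown above
# (one record copied from the array; real batches hold many).
raw = '''[
  {"id": "ytc_UggQIWXW0Sjhu3gCoAEC", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# The four coding dimensions plus the comment ID, as seen in the response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(text):
    """Parse a batch response, keeping only records with all required fields."""
    records = json.loads(text)
    return [r for r in records if REQUIRED_FIELDS <= r.keys()]

def lookup(records, comment_id):
    """Return the coding result for one comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

coded = parse_batch(raw)
result = lookup(coded, "ytc_UggQIWXW0Sjhu3gCoAEC")
# result["responsibility"] == "company", matching the Coding Result table
```

Dropping malformed records at parse time, rather than at lookup time, keeps downstream tallies of each dimension from silently counting incomplete codings.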