Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by opening one of the random samples below.

Random samples:
- `ytc_UgygC_qYc…`: Musk has applied to get his "Autopilot" to full automation pushed thru regulatio…
- `ytr_UgyakCQdr…`: @dionlindsay2 no i hear the AI. i know when AI has written a script. i know when…
- `ytc_UgyHqnYF1…`: these guys are idiots..if a robot has any intelligence at all it will figure out…
- `ytc_UgzJNEfM7…`: Can I spot a fatal flaw in Tesla's autopilot? Yes, "Tesla's autopilot"! I think …
- `ytr_Ugy7M5U6V…`: @AzureWolf It’s not a solid way of keeping peace though. Mutually assured destr…
- `ytc_Ugynyq7FX…`: Lol, learn what the word "stealing" means, and come again. Also, you can't steal…
- `ytc_UgwTsV9Rf…`: Remember the fuss and fud they caused about bitcoin’s electricity use? AI data …
- `ytc_UgzTfHV7O…`: Outlaw ai there has to be a candidate ballsy enough to run on that. This would f…
Comment

> While an interesting thought experiment it is still limited by its own logic. In the real world such problems can be solved in a number of ways, such as adressing the problems themselves rather than the consequences: "don't tail a truck that close" would be the simplest suggestion (or rather don't trail a truck with open cargo at all).
>
> These things won't be programmed at all, in this situation the car would simply treat the falling cargo as a suprise obstacle in the road and will simply crash into it. A car that "chooses" between two people would be illegal since it would, then, be an AI choice rather than a human's choice/error.
>
> These errors and mistakes in programming will happen in the first couple of generations of autos and will tend to zero as time and experience is accumulated. It's the transitional period.

Platform: youtube · Topic: AI Harm Incident · Posted: 2015-12-08T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
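These four dimensions map naturally onto a small typed record. Below is a minimal sketch in Python, assuming only the label vocabularies visible in this sample (the class, constant names, and validation are illustrative, not the pipeline's actual code; the real codebook may define more labels):

```python
from dataclasses import dataclass
from datetime import datetime

# Label vocabularies observed in this sample (assumption: the codebook
# may allow values beyond these).
RESPONSIBILITY = {"none", "user", "developer", "company", "distributed"}
REASONING = {"consequentialist", "deontological"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"indifference", "outrage", "approval", "fear"}


@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the table above."""

    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any label outside the observed vocabularies.
        for value, allowed in ((self.responsibility, RESPONSIBILITY),
                               (self.reasoning, REASONING),
                               (self.policy, POLICY),
                               (self.emotion, EMOTION)):
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")


# The row shown in the table above, as a record.
row = CodingResult("ytc_UggJ7uf4xwzHrHgCoAEC", "user", "deontological",
                   "none", "indifference",
                   datetime.fromisoformat("2026-04-27T06:24:59.937377"))
```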
Raw LLM Response
```json
[
  {"id":"ytc_UggmIyJ8SloWNngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghTisOhXvg2MXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggJ7uf4xwzHrHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughd4nDqmE0otngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghTs3eIZEp4CXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiT1_uxg4Qf93gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgiiYSCGtUOQQ3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ughs2ea7-kE5XHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugj_gIAyUkWWl3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggO5i8Su4Fd-HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
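The raw output is a JSON array with one object per comment in the batch, keyed by `id`; the values in the Coding Result table above match the entry for `ytc_UggJ7uf4xwzHrHgCoAEC`. A minimal sketch of the by-ID lookup this page supports, with a two-row excerpt of the response inlined so the example is self-contained (`index_batch` and the excerpt are illustrative, not the actual pipeline code):

```python
import json

# Excerpt of the raw response above (two of the ten rows), inlined here.
raw_response_text = """[
  {"id": "ytc_UggJ7uf4xwzHrHgCoAEC", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiT1_uxg4Qf93gCoAEC", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""


def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw batched LLM response and index its rows by comment id."""
    rows = json.loads(raw_response)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    return {row["id"]: row for row in rows}


coding = index_batch(raw_response_text)["ytc_UggJ7uf4xwzHrHgCoAEC"]
print(coding["responsibility"], coding["emotion"])  # -> user indifference
```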