Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgyeDRjL6…`: "There CAN be half truths as well as different levels of consciousness. So ironi…"
- `ytc_UgzvDpWy3…`: "Every day I am dreaming of AI taking over. One world AI government to start with…"
- `ytc_Ugw51CwNo…`: "ok but if you guys money from it, then the Money should go for the AI…"
- `ytc_UgxmrcTWP…`: "Robot can really help us and in the future robot is able to save us.…"
- `ytc_UgwQWgTyX…`: "In the we will be able to purchase a robot wife online and have it programmed to…"
- `ytc_UgxZr0TF0…`: "I think you should take him up kn his challenge of getting these AI leaders on t…"
- `ytc_UgyI8DMnB…`: "To cold fusion. Why the Ai underperforming? Because people do not yet know how t…"
- `ytc_UgwIB-O89…`: "The conclusion I have drawn from using chatgpt is that it is nothing but a sligh…"
Comment
I guess at the end of the day there are three people who are dead now because of similar accidents involving the same AI driving system, so arguing about it is kinda pointless when the proof is in the pudding. They were all accidents that an attentive human driver would not have caused.
But these drivers weren't attentive, and the fact that "the driver is still in control and is responsible" is the entire problem here. Humans are inherently bad at exactly the vigilance task "autopilot" demands of them, that is, remaining attentive for long periods in which they are not actively doing anything, with the need to suddenly react with very little notice to a serious and complex situation. As nice as it is for Tesla to say that it's all the driver's responsibility and fault if something goes wrong, it's based on an unreasonable expectation, and we've now seen the results firsthand a number of times.
The fact that "autopilot" has repeatedly shown it is incapable of performing the way an attentive driver would perform isn't acceptable for public roads.
Source: youtube · Topic: AI Harm Incident · Posted: 2022-09-04T21:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugyvgjaq2iSqCaKCETl4AaABAg.9fZyR6xVMg39f_O7OpQGwT","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxhIOY3ZdRCMdCBt5B4AaABAg.9fZoXAbtGwL9fdz5UNAZQh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw5sX7pKTcOSijZYTV4AaABAg.9fZc3jNxmWt9fZsX7oi8DW","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxCHY3MvQMVl9-7bRp4AaABAg.9fZaH2n1qlw9fh6mTqQrCn","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytr_UgwPc8BLw_4poaCMxgR4AaABAg.9fZG-rO-kVx9ffu2OLB8YI","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzdyq7FCgl-kjhMkAJ4AaABAg.9fZ19_-cMyA9fZsS827fHr","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugx7eNenlBoUqnrEtml4AaABAg.9fYkwJ7Vvi49fZQk5147pY","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzoRtGKCUlS1SQxhr14AaABAg.9fYfTsF59BK9fZJ5-Irlr_","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxeMiGEGA4wlbEQZFR4AaABAg.9fY_lREKAJ09fa0ohft2Rj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxYVPjp3WMdRJzYO6x4AaABAg.9fYUjXVhSyf9fa3bhVc4lw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
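A minimal sketch of how a raw batch response like the one above could be parsed, validated, and indexed for the lookup-by-comment-ID view. The allowed label sets here are assumptions inferred from values visible in this sample, not the tool's actual coding scheme, and the comment IDs in the example data are hypothetical stand-ins.

```python
import json

# Assumed label sets for each coding dimension, inferred from the sample
# output above; the real scheme may define additional values.
ALLOWED = {
    "responsibility": {"company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

# Hypothetical raw LLM response (IDs shortened for illustration).
raw = """[
  {"id":"ytr_abc","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_def","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""

def parse_and_index(raw_response: str) -> dict:
    """Parse one raw batch response and index rows by comment ID,
    raising on any value outside the expected label set."""
    rows = json.loads(raw_response)
    index = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        index[row["id"]] = row
    return index

codes = parse_and_index(raw)
print(codes["ytr_abc"]["policy"])  # liability
```

Validating against a fixed label set at parse time catches the common failure mode where the model invents an off-schema label, before any bad value reaches the coded dataset.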