Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "For everyone saying AI is the devil, please stop. PEOPLE intentionally created t…" — ytc_UgzEf6L9f…
- "@YourLocalManiax thank you for your comment so ai learn how it works better the…" — ytr_Ugyb8LvCH…
- "i always imagine that in a far far future, an artificial intelligence website wi…" — ytc_UgwRoQyGy…
- "They are training off of us. Each one of us has trained them in someway. It has …" — ytc_UgwIAx2g8…
- "We need ai. I found this video tact-less and not as good as the high quality ai …" — ytc_UgxQAcLrk…
- "These discussions were held a hundred years ago. They decided then who should re…" — ytc_Ugw5B1jfc…
- "Why did we enable ai art in the first place was ai chat bots not enough…" — ytc_UgzyX6iD6…
- "Very insightful. Please make more videos explaining about this topic and also ho…" — ytc_Ugy6aa1QS…
Comment
Until a full investigation is completed, all this conjecture is just that. In terms of miles travelled per fatality, Uber is actually doing poorly after this incident compared to human drivers in the US. Whilst 1 data point isn't a trend, those saying that autonomous vehicles are already safer than humans will have to wait some time further until the data proves them correct.
There is a point, perhaps where AVs are proved 3x safer than human drivers, where we can start to put more of these vehicles on the road and refine further both the software and sensors until they're at least 10x safer than humans. That should be the minimum goal and there should be targets set by governments that improve year on year so that in 50 years time, AVs are something like 100x safer than human drivers ever could be.
youtube
2018-03-21T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzh3xXJXK0zRNgmg0d4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxSjjopJfOzKxv9PmR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwF5eE7yIiiY23HVvJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwqleYC_hk9_4zqpcp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxiuD082x7C6SufLhF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw3G6aTXmaflO5GAgd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxsi0kG-PK6i58-2MJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw5SUnDN1eQ7ipnL9h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxMuX8r3eKz0xK2-gV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzx3IxvQPlS_4Vfx5x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
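The raw response above is a JSON array with one object per comment, carrying the four coding dimensions (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming the response parses as shown (the `lookup_coding` helper is hypothetical, and the sample below uses two rows from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as shown above.
raw_response = """
[
  {"id": "ytc_UgwqleYC_hk9_4zqpcp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxSjjopJfOzKxv9PmR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model output and return the coding row for one comment ID."""
    rows = json.loads(raw)
    # Return the first row whose id matches, or None if the ID is absent.
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwqleYC_hk9_4zqpcp4AaABAg")
print(coding["emotion"])  # -> indifference
```

Note that the first row here matches the "Coding Result" table for the displayed comment (responsibility: none, reasoning: consequentialist, policy: none, emotion: indifference).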