Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse the random samples below.
Random samples — click to inspect

- I think everyone should take a second view of the animatrix episodes, “The Secon… (ytc_UgwjhmEuU…)
- Just let AI reread all science and neurobiology studies that were not cancelled … (ytc_UgxrsZKKM…)
- Thanks for the info, good video. I dont think the technology is fully there for … (ytc_UgzKN4ty0…)
- ChatGPT is an AI program that's trained. You told it to take the Palestinian sid… (ytc_UgzWPpzq_…)
- No , you not giving it the right prompt, u need to know how to prompt it,it's ve… (ytc_Ugx_o04S3…)
- I think that if taken seriously this is hypocritical for Elon to ask. FSD is pr… (ytc_Ugwprg8qt…)
- @roxsy470 It's not even close to being a tool considering you can't transfer yo… (ytr_Ugzw87JYO…)
- The irony of a company literally named "OpenAI" having the most closed and black… (rdc_m9h5wj2)
Comment
Why not choosing the decision which will harm your vehicle only. Even if it means a severe accident. Self driving cars are, as said in the video, also being used for a reduction of traffic accidents. Thus everyone who buys a self driving car can be told that it will reduce the drivers probability to have an accident from a statistical point of view. However, as long as they are also being told that the car will sacrifice itself in those rare situations in order to harm no other. Then from a judiciary perspective the (hopefully not dead person) knew the "gamble" and cannot sue the companies because the car owner agreed to accept the chances of being in an accident. While keeping car safety for non self-driving cars separate from this new technology. It also prevents more accidents from colleteral damage and therefore preventing ethical issues like these in the video.
P. S. there are way too many situations to handle. This is just a thought of mine for the particular problems stated in this video. Have a nice day :)
youtube
AI Harm Incident
2021-08-10T09:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzslwt9QiheJpWRbfp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugxcu2cWNWH8_geonwJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwXxgUBr-1VnhEAZzd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwugIWuRy3J137G9j14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugx2e0zPvlz0YqnvIjZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvMPLnc85qWm1CCV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxcvVkDSDnxfMmBnS94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwSIBvE8zK_FrylW6d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTbdc_xoYz5h6zmE54AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwIxDwsTLHdFBsCgnx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
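The raw response above is a JSON array of records, one per comment in the batch, each carrying the comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion) shown in the Coding Result table. A minimal sketch of how such a response can be parsed and looked up by comment ID (the `index_by_comment_id` helper is illustrative, not part of the pipeline; only the first two records from the response above are reproduced here):

```python
import json

# Excerpt of the raw LLM response shown above (first two records only).
raw_response = """
[
  {"id": "ytc_Ugzslwt9QiheJpWRbfp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugxcu2cWNWH8_geonwJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
# The record for the comment displayed on this page:
print(codes["ytc_Ugzslwt9QiheJpWRbfp4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by ID rather than list position keeps the lookup robust even if the model returns the records in a different order than the comments were submitted.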