Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
All I’m afraid of is will this impact ai development or simply just the stealing…
ytc_UgzPHhfyA…
@Speaker-Beater if I can control AI picture creation the same way I control a p…
ytr_UgxLUCpvR…
The facial recognition the IRS made me sign up with (ID.me) back in January to l…
ytc_Ugyh-jpEZ…
interesting. penrose seems to be arguing a semi-metaphysical standpoint here. co…
ytc_UgwZHNaew…
I'm a disabled digital artist, I have multiple physical and mental disabilities …
ytc_UgwZL9jOa…
This is so old... No one built with "ancient technology" called AI and electrici…
ytc_Ugw8qYjlA…
Hi Charlie, First, thank you so much for the great introduction on ChatGPT. Your…
ytc_UgzWEL7lw…
I don't want to condition myself into acting as if the chatbot was an actual per…
ytc_UgxAtZNXD…
Comment
As a previous Lecturer of Law in Rule Production I am weary the all obfuscation of the real world solution to this question which is easily resolved in real life. Indeed law makers around the world have faced similar issues many times before with self driving trains, elevators, escalators etc. etc. And the answer is dead simple. If the car must choose between several possible outcomes it chooses according to the following schematics totally disregarding the individual status of the persons involved -

1) the car always first tries to avoid deaths
2) the car next always sacrifices the passengers (this is why a train with 500 passengers will emergency break for one person - even if the risk is multiple deaths on board btw)
3) the car sacrifices the least amount of people outside the car
4) the car goes for minimizing the individual damage
5) the car sacrifices other people on the road first
6) the car sacrifices people in traffic before people outside traffic - such as someone parking) etc. etc.

Easy peasy a rule maker were a long standing guiding principle is to let those who choose to take risks run the risks. You get in the car - you accept you go first. You lump together - you know your risk is higher. You are onto the road - you know your risk is higher. You are in a traffic situation (like in parking) - you know your risk is higher. The reason this works in real life is because self driving cars are much safer in average than people driven cars - so you have the principle of average risk taking maximized throughout the system. The only interesting question is if we always should always sacrifice people in road traffic before people outside road traffic, for which you can make a very valid case, here point 3) may however prove to be too strong a principle to disregard at any time. I frankly fear the question is only being perpetuated widely to hide the real issues with self driving cars which are about privacy, ownership of and access to personal data etc.
[You can test my principles on the film's questions and see that they hold 100%]
youtube
AI Harm Incident
2021-02-06T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgxzrJFHtwQVGAhSoTR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxU5tCZM8HakEDvsFl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMY4iDTn7n18gY6-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz4ux_V7SfOPVtCZ9V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxScl_CUE54ZUahNwt4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxSt_7nWVXXZuuXwvV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwalSdv_7oC0WE64eZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzo1Jhy8gfrjzTAU1Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxdohbhxra2IeZqFE54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxx9whhaDWkfEJOjy14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"}]
```
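A raw response like the one above can be turned into the per-comment lookup table this page uses. Below is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample output, but the label sets and the function name `parse_coding_response` are assumptions inferred from the values visible here; the real codebook may define additional labels.

```python
import json

# Label sets inferred from the sample response above (assumption:
# the actual codebook may allow more values per dimension).
DIMENSIONS = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response (a JSON array of coded comments) into
    a table keyed by comment ID, dropping malformed or unknown codes."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue  # skip entries without a comment ID
        codes = {dim: entry.get(dim) for dim in DIMENSIONS}
        # Keep an entry only if every dimension carries a known label.
        if all(codes[d] in DIMENSIONS[d] for d in DIMENSIONS):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]')
table = parse_coding_response(raw)
print(table["ytc_example"]["policy"])  # regulate
```

Validating against a closed label set at parse time is what makes the "Look up by comment ID" view reliable: a hallucinated or misspelled label is dropped rather than silently stored.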