Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgwCuf6G2…` — "If you were in a cooking competition would using instant noodles grant you as ch…"
- `ytc_UgzkYseA3…` — "As someone who loves AI, this isn’t surprising at all. From my experience AI is …"
- `ytc_UgzvHHn6y…` — "Alert, alert now they are having you scan your palm prints when you go for your …"
- `ytc_Ugxxae3I1…` — "You should start with what algorithm is NOT. Algorithm is NOT source-code. Algor…"
- `rdc_gbjmpgg` — "American citizen and an American voter here who voted against the menace in the …"
- `ytc_Ugx6jrdsA…` — "Cool idea. Subvertion of AI models that scrape Internet for data is fun! #B…"
- `rdc_ctian0n` — "In the 1970's, my father authored several books on artificial intelligence and c…"
- `ytc_UgzirYfte…` — "I wish Ai would hurry up and destroy humanity so I don't have to hear about AI a…"
Comment
There are two possible scenarios for a solution:
1) In a self-driving car city, there are no humans involved, meaning there is no human error (there wouldn't be motorcycles). All the cars will be connected to a network where the cars can communicate with each other. In that way, it can be created an instantaneous response to minimize harm in all humans around, for example with the car on the left manoeuvring as well, to create a controlled accident with less trauma. The remaining cars will have to stop or manoeuvre as well, all at the same time.
2) In the case that not all the system is connected to the network, the person to be affected is the same owner of the car, even if his life is at risk, because the casualty came on his way and that doesn't mean that he should put the life of others at risk too (it's logic). However, there could be better cars with a most equipped technology, that in the case an accident is going to happen, the computer can detect immediately and, unlike humans, it can deploy a responsive mechanism to try to protect the human, no matter what happens with the machine, like ejecting him before the accident happens, or transforming the car into a giant airbag, or even if the impact still occurs, the human can be covered by some sort of protected space inside the car, before the accident happens.
All these things are just a matter of research and try to find the best solution, instead of just being scared of self-driving cars and keep avoiding them for "ethical reasons".
youtube · AI Harm Incident · 2017-06-08T15:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UghMzFQ5uciXyHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghSVao5v-7LzHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghWZB_DNXhaTXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj7Q3CElinFQ3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjTiwroBtb2T3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjIFRBxgjA2tXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghlT0jEO-duZ3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh8Tr7F8wrmeX3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugi0hd2FnlV7Z3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugil9BPZ0b0LongCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
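The raw response is a JSON array of per-comment records keyed by comment ID, one object per coded dimension set. A minimal sketch of how such a response might be parsed and validated in Python — note that the allowed-value vocabularies below are inferred only from the values visible in this dump, and `parse_llm_response` is a hypothetical helper, not part of any tool shown here:

```python
import json

# Allowed values per coding dimension. Only values actually observed in the
# dump above are listed; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "user"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"indifference", "approval", "outrage", "resignation", "fear"},
}

def parse_llm_response(raw):
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting records with missing keys or out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            raise ValueError("record missing 'id': %r" % rec)
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad value %r for %s" % (cid, rec.get(dim), dim))
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = '[{"id":"ytc_X","responsibility":"none","reasoning":"consequentialist",' \
      '"policy":"none","emotion":"fear"}]'
print(parse_llm_response(raw)["ytc_X"]["emotion"])  # fear
```

Validating against a closed vocabulary at parse time is what makes a "look up by comment ID" view like the one above safe to render: any hallucinated or malformed code from the model fails loudly instead of silently entering the coded dataset.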