Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The motive he says is money, but he doesn't suspect the more obvious suspect. O…
ytc_Ugzy8F-Jt…
Utter control by ai bot shit...ahhh its gets better every day done it...
This vi…
ytr_Ugxf5TgO7…
I always say "thanks bb xx" to chatgpt just because i'm bored and want to preten…
ytc_UgzjSq4P0…
The idea is brilliant. If you pay $1000 monthly to a truck driver who has been f…
ytc_Ugzdrcybb…
I saw this other video of an ai making a husband and wife fight there whole life…
ytc_UgzoZ561e…
People think that by increasing the number- suddenly AI can think for itself and…
ytr_UgygUUeYn…
This is what dipshit right wingers should be worried about. but they are all tu…
ytc_UgzA5fDBc…
"hey look at my art" should be "hey look at my prompt" if its AI…
ytc_UgxF0k0Dc…
Comment
One big problem: a bridge does not make decisions. Or rather, it makes them in such a predictable manner that we can anticipate the results. A self-driving car, on the other hand, makes decisions analogous in complexity to a human's. It succeeds or fails in a fashion that is equally unpredictable to a human. There is a reasonable threshold at which the car is "good enough" that the manufacturer/designer is no longer responsible for failures, because they are expected to be rare or unreasonably hard to avoid.
The purpose of (the threat of) punishment and reward is to shape the behaviour of living individuals in the most efficient way. That's why we don't punish/reward adults, experts, children, or the disabled in the same way for the same action, even in the same context. In the case of machines, we may or may not have the option to shape their behaviour directly, and punishment/reward may or may not have an effect.
Terms like responsibility, rights, obligation, and freedom gain a much broader meaning when you are dealing with agents that don't necessarily have an analogue to suffering or well-being, may lack (or have a superior) ability to predict them in themselves and others, and may have alternative means of guiding behaviour.
youtube
AI Moral Status
2017-02-23T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UggYw13YsQ9UengCoAEC.8PKT6UCB8jL8PKZMQ9hbPu","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UggYw13YsQ9UengCoAEC.8PKT6UCB8jL8PLWJ9pwiZJ","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UggYw13YsQ9UengCoAEC.8PKT6UCB8jL8PLxw9x_HD-","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugg3KxPrjlzt6HgCoAEC.8PKSG_JAaDb8PKcX1h2yQC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgjfZ7jGW7cLn3gCoAEC.8PKRmVy9pn28PKWiE7fH7q","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiiAT_fZcx3cngCoAEC.8PKRTi4Lfho8PKS7Fs_o5L","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugh1j66C9k7XO3gCoAEC.8PKRKm0qICT8PKS8yAWZ4y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgjulCwo6IWvM3gCoAEC.8PKRGTCuwb-8PKSaGgm_Ry","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugjjxo2tMS6OcXgCoAEC.8PKQncQUTNp8PKXPtUoP-m","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgggfKyYxs8w4HgCoAEC.8PKQjIdrxlC8PKkf_u6fjb","responsibility":"manufacturer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
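The raw response above is a JSON array of per-comment codes with four dimensions. A minimal sketch of how such a response might be parsed and validated before loading into a coding database; the allowed category values here are inferred from the samples shown on this page and are assumptions, since the actual codebook may define more categories:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample output above; the real codebook likely has more categories.
ALLOWED = {
    "responsibility": {"company", "manufacturer", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it is a dict with an "id" field and every
    coding dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical example input mirroring the schema above.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
print(len(parse_coding_response(raw)))  # 1
```

Dropping malformed records rather than raising keeps a long coding batch usable when the model occasionally emits an off-codebook value.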