Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- ytc_UgwfbG7Ce…: ... basically, if any one "central" artificial intelligence (or an advanced comp…
- ytc_UgyAhSs8f…: Okay, call me an asshole, a sexist, mysogynist, whatever, but i this rhat that r…
- ytc_Ugw3Y5Nyo…: As soon as AI perceives suicidal intentions, it should give the suicide hotline …
- ytc_UgzwcVjKE…: As a traditional medium artist, I tend to feel the same way about digital artist…
- ytc_UgzTa4rOg…: It's not a matter of when AI will gain consciousness, but what we can do about i…
- ytc_Ugzd_Xy3w…: Yea thats cause for 2k years weve been warned about ai in revelations chapter 13…
- ytc_UgxfBreLC…: The AI is overrated, anyone who have professionally worked with any LLM models u…
- ytc_UgyTtJ5ru…: Will be interesting once the schools overhauled. Sounds like the steiner school…
Comment
+Xezarious42 This could be regulated by holding the driver responsible for the decisions the self driving car makes. Company B's cars would inherently be more prone to cause harm to others, even if they're slightly safer for the occupants. Occupants of company B's cars would thus suffer more liability on account of their car's decisions in the long run.
The argument only works in a scenario where you could be severely hurt and would rather sacrifice someone else to minimize your harm. But you could never predict that if you buy company A's car it would save your life and company B's car will get you killed. Both companies would have good safety track records if self driving cars become common.
Ultimately people will choose the car that is least likely to hurt someone or cause damage regardless of if it's self driving or not, because they're responsible for that damage.
Source: youtube · Incident: AI Harm Incident · Posted: 2015-12-08T23:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgggitcG_CbrUXgCoAEC.87WDKCb8uB_87Wc8Git_dg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytr_UgiwrXyY1rZ69XgCoAEC.87WAOkvgKe087ZUsAi0XrE","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgiwrXyY1rZ69XgCoAEC.87WAOkvgKe087Zk-pe2Tvs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgjFbGyekK77fngCoAEC.87VyK9Y8dlO87WlJMEeWoL","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgiiYSCGtUOQQ3gCoAEC.87VxmXkagiW87W5Qy0QLaG","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UghlFJ0pJ4lt_ngCoAEC.87Vxapt4mjd87W8NWV8_40","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UghfG4rKbyLwlXgCoAEC.87VxK1IcVdT87W9BZPSPbn","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"mixed"},
{"id":"ytr_UghiyJc91JDD0XgCoAEC.87Vw0Yij8ek87VyZ_yCQtY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UghiyJc91JDD0XgCoAEC.87Vw0Yij8ek87VzogUdvsL","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugg-LAtK9Y2urngCoAEC.87VuXpLFzck87VxWEbSmkv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
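Since each record in the raw response carries the comment ID plus the four coding dimensions (responsibility, reasoning, policy, emotion), a batch like the one above can be indexed for "look up by comment ID". A minimal sketch, assuming the response is valid JSON as shown (the sample record is the first entry from the response above; the variable names are illustrative, not part of any tool here):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# This is just the first record from the batch above, for illustration.
raw = """[
  {"id": "ytr_UgggitcG_CbrUXgCoAEC.87WDKCb8uB_87Wc8Git_dg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "indifference"}
]"""

# Index the records by comment ID so a single coding can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw)}

coding = records["ytr_UgggitcG_CbrUXgCoAEC.87WDKCb8uB_87Wc8Git_dg"]
print(coding["responsibility"], coding["policy"])  # user liability
```

In practice the full array would be parsed the same way; a malformed response would raise `json.JSONDecodeError`, which is worth catching when ingesting model output.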