Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
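Outside the page itself, the same lookup is easy to script against an export of the coded records. A minimal sketch, assuming the raw responses are stored as a JSON array of objects keyed by `id` (the filename and storage format here are assumptions, not the tool's actual backend):

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_llm_responses.json"):
    """Return the coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # expected: a list of dicts like the array shown below
    return next((r for r in records if r.get("id") == comment_id), None)

# e.g. an ID that appears in the raw response at the bottom of this page:
print(lookup_raw_response("ytc_Ugyls7BwIgpDs0M93ih4AaABAg"))
```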
Random samples

- `ytc_UgwMmhVjM…`: "I generally like Steven Bartlett, but this podcast just showed the side of him I…"
- `ytc_UgwLVS9mD…`: "What the flying stick is that?! Tesla autopilot is not programed to shut down 1 …"
- `rdc_j6fjssp`: "In the USA, under the Fair Standards Act, [is straight up **illegal**](https://w…"
- `ytc_UgxNdVNai…`: "LMAO I Just watched Ex Machina and knowing that A.I. like this exists is quite c…"
- `ytc_Ugwzowcmr…`: "Look, nobody cared about all the cashiers that lost their jobs to self checkout …"
- `rdc_kgpyx3o`: "+ he is accusing OP, when the policy/rule states the use of AI needs to be prove…"
- `ytc_UgxOg7iPN…`: "Listen, I'm open minded about what can and can't be considered art. And regardle…"
- `ytc_UgyA6zV1A…`: "and just in case everyone doesn't know this. they get there Facial Recognition d…"
Comment
6:40 While I'm willing to trust you if you gave medium-quality evidence that Tesla's cars are overall more dangerous than humans, your "two cyclists are dead" argument is a huge no-no.
If 10,000 drivers each kill with a 3% chance and 10,000 AIs each kill with a 2% chance, that's still 300 and 200 kills, respectively. One is clearly better than the other. Such statements without a comparison (a.k.a. a baseline) are highly irritating to me.
By demonstrating that you do not consider ratios, you weaken your whole argument and your credibility.
Hospitals kill many people every day; it's one too many; let's remove hospitals! >:(
You get my point...
Don't make bad arguments, especially not when you cover them with a cream of emotional manipulation; it makes you look bad.
I personally believe that Teslas might be worse than humans, but I don't have the will or the time to check that out. I have better things to do. I wish your video had enlightened me on that point and given evidence.
youtube · AI Harm Incident · 2022-10-02T22:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
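The table is just a projection of one record from the model's response. A sketch of how it could be rendered, assuming each coded record is a plain dict with the keys shown in the raw response below; which array entry belongs to this particular comment is not shown on the page, so the `id` pairing here is illustrative:

```python
# Assumed shape of one coded record (keys match the raw response below).
record = {
    "id": "ytc_Ugyls7BwIgpDs0M93ih4AaABAg",  # illustrative pairing
    "responsibility": "unclear",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "indifference",
}
coded_at = "2026-04-27T06:24:59.937377"  # timestamp recorded at coding time

print("| Dimension | Value |")
print("|---|---|")
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"| {dim.capitalize()} | {record[dim]} |")
print(f"| Coded at | {coded_at} |")
```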
Raw LLM Response
[
{"id":"ytc_UgwnBYHLKEqXh_e-0mB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwpSsLV-ro0_E5CHLt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxo4o9yo2hLxj-msV94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxZIr0zAr7vk6nDop4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwnldreHGeXyT2ZQKh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzLw2H82mWnX4OXMqJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBPxs3sJxGmbQlV3t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyls7BwIgpDs0M93ih4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwN8uZBJjWtqseMS114AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwlPDycl5TdNN0xkEl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
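A response like this is only usable if it parses as a JSON array with one object per comment and a recognized value for every dimension. A minimal validation sketch; the allowed values are inferred from the labels visible in this export, not from an authoritative codebook:

```python
import json

# Inferred from the values observed above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate(raw: str) -> list:
    """Return a list of problems found in a raw LLM response string."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: unexpected {dim}={rec.get(dim)!r}")
    return problems
```

Run against the array above, this returns an empty list; a truncated or malformed response would surface its problems here instead.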