Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI is only about 75% accurate. I have had Google Bard down to 50% on even simple…" (ytc_UgwZTM8gt…)
- "Fortunately I have no friends, so I can talk to AI that all I want :D…" (ytc_Ugz380moQ…)
- "If only people paid as much attention to specific threats to humanity like clima…" (ytc_UgzoHWpV1…)
- "But let's make AI good for us normal people I'm not sitting around making video …" (ytc_UgyQy14kB…)
- "3.5 million truck drivers are just one step in automating people out of jobs. As…" (ytc_Ugwm5ko2X…)
- "Womp womp. They should take it up with the government if they're starving. Not m…" (ytr_UgyifFMA8…)
- "Irs really disgusting how people STILL make fun of/hate on people that use ai ar…" (ytc_UgxTqLILO…)
- "I once heard that a class action lawsuit against AI companies for stealing artwo…" (ytc_UgxMK94tK…)
Comment

> +BosonCollider
> Yes, they do need to be perfect to be better than humans. That's the real question the video is asking. If self driving cars can't be better than people, I won't bother with them. The video was actually talking about how a self driving car would improvise during a crisis situation and the criteria it would use in order to make those decisions. I'm not too comfortable trusting flawed programming or flawed crisis management decisions. No thanks! I still trust "flawed" people more. I still believe that the majority of people are good hearted and don't want to intentionally hurt others. Call me crazy, but I have more faith in people than devices that follow cold programming. I guess we'll all just have to wait and see what happens with the safety records of these things once they're made. If they prove to be statistically safer than human drivers, I'm open to it. I'm not closed minded to real and factual progress. We'll just have to see.

youtube | AI Harm Incident | 2015-12-10T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugi-ra97OFAYf3gCoAEC.8A2x-6Y9iR39_jA1MigRw-","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggcjG7wPcXM-ngCoAEC.87ksLSYwmAW87lRqc_5nOt","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgjbGUooE19fn3gCoAEC.87ae9OwYcWP87aeSIvS21k","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgibjtNUDEehjngCoAEC.87_AnhDBK0Q87_DW-CiC2P","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Uggm5BdzwhyWVngCoAEC.87ZJkl4btdC87ZRqdCDAIY","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zv0jNj-Ag","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zx3NNhY6U","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87ZxquYYaiQ","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggP-iFt14eaaHgCoAEC.87YkvCWMel-87Zi3ixQABR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UghURWjOQRHtGHgCoAEC.87XLJSTRT9v87clu7Fezdn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
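A raw response like the one above can be parsed into per-comment codes and checked against the coding scheme before it is stored. The sketch below is a minimal, hypothetical validator, assuming each record carries exactly the four dimensions shown in the Coding Result table; the allowed-value sets are drawn only from labels that appear in this sample, not from the full codebook, so treat them as placeholders.

```python
import json

# Dimensions and labels observed in this sample. The real codebook
# likely allows more values; these sets are an assumption.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "fear", "mixed", "resignation", "indifference"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its coded dimensions, raising on any
    label outside the expected sets."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in dims.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = dims
    return coded

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
print(parse_coded_batch(raw)["ytr_example"]["emotion"])  # fear
```

Validating at parse time, rather than on read, means a drifting model output (a new or misspelled label) fails loudly the moment the batch comes back instead of silently polluting the coded dataset.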