Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This video presents an extremely biased viewpoint and technically falls under th…
ytc_UgxfiSDv0…
just pu a job that cant be autonomous, worse comes to worse get ur passport and …
ytc_UgzgL3tjl…
This is maddening. I don't require AI. I haven't sought ought. I don't want anyt…
ytc_UgxmmEFbS…
You guys talked about xAI and Sora, but don't forget Gemini, like if that one is…
ytc_UgzwLjqn-…
This guys is so talking out of his backside... GPT is so lame at serious stuff..…
ytc_Ugx6W9VJ_…
> Yeah, a lot of people putting their fingers in their ears and head in the s…
rdc_kupnotg
An LLM is incapable of fighting for its own survival, it simply doesn't understa…
ytc_UgzH_A7rU…
Except 1 person painted that after millions of others painted garbage... A "tal…
ytc_Ugxwr-QGj…
Comment
+Thomas Smith There's another big ethical dilemma that this video doesn't address: the potential for self-driving cars to be hacked. Currently, it's fairly simple and easy for new models to be hacked and controlled, even when not self-driving. This could be a simple, horrifying way for people and organizations to kill anyone they don't like, and often without a trace. You hack the car, slam it into a building or off a cliff, and voila, victims killed. I imagine if we ever get a majority of vehicles to be self-driving, we will still need cars with human drivers for important politicians, heads of companies, etc. Anyone who might be threatened with assassination.
Meanwhile, another dilemma, less deadly, but far more common will be passenger road rage at their own driverless systems, for being unacceptably slow, safe, and polite. I live in Slovakia where 70% of drivers will not stop at a crosswalk, and damn the consequences. How would these people react if their car stopped for them? And on a regular basis? If we all had self-driving cars, they would save our lives, and we would hate them for it.
youtube
AI Harm Incident
2015-12-10T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugi-ra97OFAYf3gCoAEC.8A2x-6Y9iR39_jA1MigRw-","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggcjG7wPcXM-ngCoAEC.87ksLSYwmAW87lRqc_5nOt","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgjbGUooE19fn3gCoAEC.87ae9OwYcWP87aeSIvS21k","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgibjtNUDEehjngCoAEC.87_AnhDBK0Q87_DW-CiC2P","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Uggm5BdzwhyWVngCoAEC.87ZJkl4btdC87ZRqdCDAIY","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zv0jNj-Ag","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zx3NNhY6U","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87ZxquYYaiQ","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggP-iFt14eaaHgCoAEC.87YkvCWMel-87Zi3ixQABR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UghURWjOQRHtGHgCoAEC.87XLJSTRT9v87clu7Fezdn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
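The coded records above share a fixed four-dimension schema (responsibility, reasoning, policy, emotion). A minimal validation sketch for such a raw response, with the allowed labels inferred only from the values visible on this page (the full codebook may define more):

```python
import json

# Allowed labels per dimension, inferred from the samples above;
# assumed incomplete relative to the actual codebook.
SCHEMA = {
    "responsibility": {"company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "mixed", "resignation", "indifference"},
}

def validate(records):
    """Return (comment id, dimension, bad value) triples for every
    coded value that falls outside the allowed label set."""
    errors = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append((rec.get("id"), dim, value))
    return errors

# A record shaped like the raw response above (hypothetical id).
raw = ('[{"id":"ytr_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(validate(json.loads(raw)))  # → []
```

A check like this is useful between the LLM call and the database write: model output occasionally drifts outside the label set, and flagging the offending comment ID makes re-coding straightforward.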