# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples

- "@johnnybc1520 it's definitely an impending doom. Currently underway. Not many p…" (`ytr_UgzLhEvlG…`)
- "funny thing AI........'better than humans' they claim......... Yeah well to Pala…" (`ytc_UgxDQ68eZ…`)
- "We need a disaster caused by AI. And soon. It needs to be significant enough, wh…" (`ytc_Ugz--w5v9…`)
- "If you study the Transformer architecture and Attention Is All You Need, and exa…" (`ytc_Ugy5ssAdg…`)
- "What a terrifying industry AI is…like why are there such evil people in the worl…" (`ytc_Ugwj5ZYwN…`)
- "AI actually means Alien Technology. It's entire purpose is to destroy the human …" (`ytc_UgxfYDeDa…`)
- "see: Blake Lemoine who negates your every claim. You only think AI isn't conscio…" (`ytc_UgyEBEZbU…`)
- "Again, if you don't want your art incorporated into ai art generators, stop post…" (`ytc_Ugyt_XLEw…`)
## Comment

> That's because they removed the forward looking radar and rely on cameras - that can be fooled just like the human eye, particularly by huge dark objects like the underside of a semi. The video that opened this segment was just such an example. The tesla did not react at all until the cab, being what, was fully visible. By then it was too late. A forward looking radar, such as the one in most modern cars for accident avoidance and adaptive cruise control should be mandatory in self driving vehicles for the same purpose because they can see where cameras cannot. And the chips in those guidance cameras can be *easily* fooled by strobe lights, being overwhelmed and unable to determine a safe path and therefor a safe stopping distance or maneuvering path.
>
> And Tesla should be sued right off the market for using the words 'self driving' since the cars MUST be monitored and assisted by a human. So it is 'driver assist', not in any way self driving. Until they can remove the steering wheel with 200% surety the vehicle will not crash except under extreme situations the car should only have control of the steering for emergency maneuvering, that is all. The human operator should be in 100% control otherwise - **PERIOD**

Source: youtube · Category: AI Harm Incident · Timestamp: 2024-12-24T01:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
{"id":"ytc_Ugylawk4Wwo2HaZN6Gd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygC_qYcmGjSMiJmA94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3VkzIh25fPT5W7Gh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTcfQFE4lU3kA-jFl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgygOgEYmoGpSAABP_F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxo0KySK5XL5aODMIp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugym8uQepetHZvqvByt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzK4hC1-MQssaF_xpV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzftlsHe8yLGgZJjS14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCPvU_oF2h9AS6a_d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
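The raw response above is a JSON array of per-comment codes keyed by comment ID, one object per comment with four dimensions. A minimal sketch of how such a batch might be parsed and validated before landing in a coding table follows; the allowed value sets are inferred only from the rows visible above (the full codebook may permit more values), and the comment ID `ytc_X` is a hypothetical placeholder:

```python
import json

# Dimension vocabularies inferred from the visible responses above;
# the actual codebook may allow additional values (assumption).
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "resignation"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array) into {comment_id: codes},
    rejecting any row whose dimension value is outside the known vocabulary."""
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

# Hypothetical single-row batch mirroring the Coding Result table above.
raw = ('[{"id":"ytc_X","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"indifference"}]')
codes = parse_coded_batch(raw)
print(codes["ytc_X"]["policy"])  # liability
```

Validating against a closed vocabulary at parse time catches LLM drift (novel or misspelled labels) before it silently corrupts downstream counts.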