Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @mekingtiger9095 theres already a problem with AI-generated science videos being… (ytr_UgxVPtPKA…)
- Elon Musk co-founded OpenAI and has been talking about this for 15 years? Ohh… B… (ytc_UgypR5HIB…)
- I dont thing Ai feel it can at best copy human reactions like a psychopath… (ytc_Ugy3O8Fmi…)
- Only explicitly legal in 4 states. NY not being one of them, so this is going to… (rdc_cpnpq2b)
- Look at the Movie I Robot. Starring Will Smith is playing a detective. It will s… (ytc_Ugz2AgetX…)
- Ai will only be used as supercomputer not as a robot or something humnoid form ,… (ytc_Ugw3RaMFd…)
- LLM's are hard to build. I usually enjoy your videos but if you know how the dat… (ytc_UgyukatWU…)
- In the long run, this is good technology. The problem is we have to get rid of c… (ytc_UgzMxnA_S…)
Comment
The terms autopilot and autonomous driving are deliberately used in advertising, giving the impression that the vehicle can move safely and completely autonomously without human control. The truth is that all these vehicles (not only Teslas) have to be permanently monitored by the driver, which casts considerable doubt on the usefulness of these systems - this also applies to unreliable so-called assistance systems. The absurdity is now taken to the extreme by monitoring the driver with cameras so that in the event of an accident caused by autopilot, he can be blamed for his lack of control over the system.
Let's be clear, these systems are not fully developed and they probably never will be. Traffic and roads worldwide are, in a certain sense, a chaotic system that cannot be controlled by AI - anyone who relies on technology here is suicidal and will end up being punished for naively believing the marketing claims of the manufacturers. With the increasing number of modern cars, we will see more and more of these accidents.
youtube · AI Harm Incident · 2023-09-17T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzDmyYGb04_fl5Vhx14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwPan9_yS9N0kZ3Ln94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzD7Z_RuEo0hqILsQ14AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy0Vd-5pqYE53uy_Ud4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy8QI_p5qcZnihxq5Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw_2mObqmhfhrkC-5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyHjykj2neMsIYtQpN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzo5urqv9tcQ9rn5q14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz-r2lvVjWBCFXTVr94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzycewy1LUka5BbmMp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
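The raw response above is a JSON array with one object per comment ID, each carrying the four coded dimensions. A minimal sketch of a validator for such a response, assuming the dimension vocabularies are exactly the values visible in the coded rows here (not an official codebook), with a hypothetical comment ID for the usage example:

```python
import json

# Allowed values per dimension, inferred from the coded samples shown
# above (an assumption, not an exhaustive codebook).
DIMENSIONS = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries that have an id
    and an in-vocabulary value for every dimension."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if "id" not in entry:
            continue  # an entry without an id cannot be matched to a comment
        if all(entry.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(entry)
    return valid

# Usage with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # 1
```

Filtering rather than raising keeps a batch usable when the model miscodes a single comment; the dropped IDs can then be re-queued for recoding.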