Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
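Programmatically, the same lookup can be sketched as below. This is a minimal illustration, not the tool's actual implementation: the file path, the batch layout (a JSON list of batches, each a list of coded entries), and the helper name are all assumptions.

```python
import json

def lookup_raw_response(path, comment_id):
    """Scan a file of raw LLM batch responses and return the coding
    entry for the given comment ID, or None if it was never coded.

    Assumed layout: the file holds a JSON list of batches, and each
    batch is a list of objects like {"id": ..., "responsibility": ...}.
    """
    with open(path, encoding="utf-8") as f:
        batches = json.load(f)
    for batch in batches:
        for entry in batch:
            if entry.get("id") == comment_id:
                return entry
    return None
```

A linear scan is enough for occasional inspection; a tool serving many lookups would instead index entries by ID once at load time.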
Random samples (click an ID to inspect):

- ytc_UgwmGlBGQ…: "imagine being able to have the machine build you whatever you wanted and were on…"
- ytr_UgwFOqTph…: "Then idk.................LEARN THE SKILL..........Instead of having some mindles…"
- ytc_Ugz4ENnmc…: "I think this is hogwash I think he just want to be the first person to discover …"
- ytr_Ugzh3C3bR…: "@epikgamingcat Thanks for the Input! I often work in a somewhat similar fashion.…"
- ytr_UgycNyrhC…: "@VeryCooIName I mean right. AI doesn’t care about PC ethics. It just tells you t…"
- ytc_Ugz8y5L3S…: "It’s happening right now people…wake up!!! You are being manipulated every day b…"
- ytc_Ugw7K-wdQ…: "Me after seeing my freinds c ai chats on discord ( Anime girl roleplay all girls…"
- ytc_Ugwwr-bmi…: "I don't think deepfakes are the issue here. Sounds like Korea has an epidemic of…"
Comment
The tech is unreliable. Still over a decade away from full autonomous. If the passenger has to constantly monitor the "FSD" then it's not true FSD, it's a gimmicky tech head toy. People are deluding themselves. Supervised FSD is not useful FSD in the real world, but rather setting the driver up for a vehicle accident. These accident statistics are meaningless, because it's supervised "FSD", not true FSD. How many driver interventions occur to avoid accidents, probably many driver interventions, thus distorting the accident rate statistics in their favour. Those countless moments where a human prevents the car from doing something stupid. It’s glorified driver assist with marketing spin. By the time a driver reacts to supervise mode switching off, in many instances it's probably too late to avoid an accident. If a human must remain alert, supervise, and be ready to take over in seconds, then the system is not autonomous, but rather it’s a liability masquerading as innovation.
Source: youtube · AI Harm Incident · 2025-10-20T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyd0Zdl2P5kRCjAY-14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx81aIoRsYIHWjxTqZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwxebj6TaeyyxQDUcZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVUmz7fEZ6ACnvu694AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyUIqommoShUslgoJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlknNVIenBxRW8iMV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwmc6fCfHm-yZEVjmd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw2sMFQ8Ipn1MIjnwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyB5UfDfI6dkAtvdBZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyYvDvOPnHxA67TAB14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
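Before a raw response like this is accepted into the coding table, it helps to check that every entry carries all four dimensions with recognised values. The sketch below does that; note the allowed value sets are inferred only from the codes visible on this page (the full codebook may define more), and the function name is illustrative.

```python
import json

# Dimension values observed on this page; the real codebook may allow more.
ALLOWED = {
    "responsibility": {"none", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "ban"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def validate_batch(raw):
    """Parse one raw LLM response (a JSON array of coded entries) and
    return a list of problems: any dimension missing or outside the
    observed value sets."""
    problems = []
    for entry in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                problems.append({"id": entry.get("id"),
                                 "dimension": dim,
                                 "value": value})
    return problems
```

An empty return value means the batch can be ingested as-is; flagged entries would be sent back for re-coding rather than silently coerced.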