Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "you have unethical questions, Data of each person are just limited what AI can d…" (ytc_UgyASBInZ…)
- "Some day science may have the existence of mankind in its power, and the human r…" (ytc_UgxORZPAz…)
- "As an individual who just got into AI song covers and fucks around occasionally …" (ytc_UgxTqO7oG…)
- "At the end, it was curious to see how you offered one Dune analogy and got hande…" (ytc_UgwZOITcE…)
- "The original video the AI tells the dude Revelation 13:18 verse. The number of h…" (ytc_Ugw_jfCpF…)
- "All these execs and CEOs just giving fluff answers. Current state is as the MI…" (ytc_UgzeJuSAp…)
- "dude... he does the exact same thing to any other ai, male or not. maybe stop pr…" (ytr_Ugx2vsFF8…)
- "AI won’t solve humanities problems, nor will it create new ones. It will simply …" (ytc_Ugwh7MVdm…)
Comment
I think everyone overestimates the amount of data Tesla actually has. There's a Mr. Subliminal aspect to this whole self-driving-via-crowd-sourced-snippets idea that's been around for years. The Cruises and Waymos et al. collect thousands of gigabytes of data per car, orders of magnitude more from each individual drive, to get their better yet still imperfect systems. Moreover, especially in the case of data purportedly collected from thousands or millions of actual human drivers, even if Tesla had all the sensor and vision data from a given set of circumstances, it would have zero idea about the decision-making (inside human heads) that produced that situation. I watch a lot of FSD videos, and the amount of improperly labeled stuff that zips through the UI screens (and is never noticed by the stans making them) is incredible. One common error I see is an inability to distinguish between a scooter and a sportbike, which is scary given the vastly different expectations a human driver has for those vehicles.
youtube · AI Harm Incident · 2022-09-13T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
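A coded row like the one above can be sanity-checked against the coding scheme. This is a minimal sketch: the label sets below are assumed from the values that appear in this dump, and the full codebook may define more; the `validate` helper is illustrative, not the tool's actual implementation.

```python
# Observed label sets per coding dimension (assumed from values seen in this
# dump; the real codebook may be larger).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "unclear"},
    "emotion": {"approval", "fear", "indifference", "resignation", "outrage", "mixed"},
}

def validate(row):
    """Return (dimension, bad_value) pairs for any out-of-scheme codes."""
    return [(dim, row.get(dim)) for dim in ALLOWED
            if row.get(dim) not in ALLOWED[dim]]

# The coding result shown in the table above, as a dict:
row = {"responsibility": "company", "reasoning": "consequentialist",
       "policy": "unclear", "emotion": "mixed"}
print(validate(row))  # [] -> all four dimensions are within the scheme
```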
Raw LLM Response
```json
[
{"id":"ytc_UgytcoWOg5d6TC-CNGt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWnEcFPJMWPFUQN_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzTKozoibJokBH4_tt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxsyJYcuDm7zA6vb1R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxsr_WuaGBN7HnXu3t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy_MQp3jYBYVbNPF6R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-iMEgtihCVor1JzB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwMMR92JlZ2lsCNGEp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwwq2VRDWUGRT725fR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwDysAq0O8qrRhJoqV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
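The "look up by comment ID" step amounts to parsing this JSON array and indexing it by `id`. A minimal sketch, assuming the raw response is valid JSON as above; the `lookup` helper name is illustrative, not the tool's actual API. Two entries are copied from the response for the demo.

```python
import json

# Raw LLM response: a JSON array of per-comment codes keyed by comment ID
# (two example entries copied from the response above).
raw_response = """[
  {"id": "ytc_UgytcoWOg5d6TC-CNGt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzTKozoibJokBH4_tt4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment, or None if it was not coded."""
    return codes_by_id.get(comment_id)

print(lookup("ytc_UgytcoWOg5d6TC-CNGt4AaABAg")["emotion"])  # approval
```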