Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- That "friend" saying, "AI isn't even that easy to use" is like ordering takeout … (ytc_UgzB6OfSt…)
- the only way i could see it being used as a tool if it was made morally is if it… (ytc_UgzipBEmr…)
- 6:35 As a software engineer, I understand your frustration with how this technol… (ytc_Ugzc2F1Ut…)
- I heard AI messes with people's ability to learn if they use it to learn. Like i… (ytr_UgwWq1jue…)
- We may not be anywhere close to “strong AI” if such a thing is even possible, bu… (rdc_ioeofit)
- Why do people anthropomorphize AI like it's some kind of thinking feeling being.… (ytr_Ugz9khCTa…)
- He’s definitely more comfortable with the questions on AI then the questions on … (ytc_Ugw0-kARK…)
- I value my ability to reason. I have zero interest in using AI. I'm forced to us… (ytc_Ugyp843Bg…)
Comment
The current Tesla FSD Supervised definitely satisfies the definition of Level 3; even Navigate on the highway does. The real issue is the time allowance for when the system demands you take over: for Tesla it isn't preplanned, it's just immediate. Even using AP, if I weren't nagged to touch the wheel, the car would drive along just fine. Even when it freaks out and asks me to take over, it will often continue to drive correctly; it just can't auto-recover from such an event. Tesla purposely says it's Level 2 for legal reasons, not because of the capabilities of the system.
With V14 you can clearly see the autonomous robotaxis operating in Austin are essentially running the same version as the consumer version, so the only real difference is that one demands some user input while the other doesn't (and we don't really know if there is a remote operator in the background, but that is probably only when needed, not someone constantly making the decisions, based on millions of private vehicles essentially doing the same thing).
youtube
2026-04-01T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy4vjl8I7ePbLtLljB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw3iO-GiFf89cZSv4R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx5y6cfr63gaO-HQaN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwR8848jPWw3SSkHkp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx9hTHv5jXXfXvNt_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```