Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “It’s actually not illegal. The article said specifically that there’s no clear b…” (rdc_k7kzf3m)
- “... but who will by the products AI generates .. if no on has a job…” (ytc_UgxR3s_Ta…)
- “There have been movies involving AI where they got real better and smarter. Term…” (ytc_UgyzqGjeL…)
- “I really don't see how a Language Producing Machine might become my AI companion…” (ytc_UgwYCcWS8…)
- “Openai when they see gemini 3 capabilities: OH nah nah nah we gotta do somethi…” (rdc_njh8p4o)
- “It kinda looks like the robot that sings I feel fantastic and bro my name is in …” (ytc_UgxcAYYMY…)
- “I think it's more of how simple humans are rather than how intelligent ai are, w…” (ytc_Ugyb0xmVJ…)
- “It is scary looking at how it advanced in last 3 years, I have no doubt it's gon…” (ytc_UgzEk_I0p…)
Comment
Elon has good intentions, but he is deluding himself and the company. Vison is highly flawed, and Humans only manage by knowing alot about what they are looking at. AI won't be as good until it wants a holiday and time off.
As a safety Engineer I am also against FSD, they keep running out of resources and having to upgrade hardware. There is a pattern to this strategy, and its all about ignoring the obvious. I can imagine that in Tesla noone is allowed to doubt the mission. I imagine that Elon has a mindset for this that noone can argue with.
Source: youtube · AI Harm Incident · 2022-09-28T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzWThy_31yWJGPWy0R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyG1VHogRheYoD7aV94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzfaddCTVV7jHOcCch4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMVrDvmyCCWG29Px94AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyq9vdbdEM8g6b2wy14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxPpGM4YiM6wBY3aRR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9GQhsuOjt5NpnpaR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwMJ6PtPy5wuF4C__N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy63s3lvA0De1gsCO94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyJJWwjKUTeZ3n_SKN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
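The raw response above is a JSON array of per-comment codes along the four dimensions shown in the coding result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and indexed by comment ID, assuming this array shape; `index_codes` is a hypothetical helper, not part of any tool shown here:

```python
import json

# The four coding dimensions, taken from the coding result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and map comment ID -> {dimension: value}.

    Entries missing the ID or any dimension are skipped rather than
    raising, since model output is not guaranteed to be well-formed.
    """
    coded = {}
    for entry in json.loads(raw):
        if not all(key in entry for key in ("id", *DIMENSIONS)):
            continue  # drop malformed entries
        coded[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return coded


# Example using one entry from the raw response above.
raw = """[
  {"id": "ytc_UgyG1VHogRheYoD7aV94AaABAg",
   "responsibility": "company",
   "reasoning": "deontological",
   "policy": "ban",
   "emotion": "outrage"}
]"""
codes = index_codes(raw)
print(codes["ytc_UgyG1VHogRheYoD7aV94AaABAg"]["policy"])  # ban
```

Indexing by ID this way lets a lookup like the one in this inspector retrieve the coded dimensions for any single comment without rescanning the full response.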