Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
my most normal AI chat basically goes like
"A 3 CHERRY CARD A DAY MAKES THE 403 …
ytc_UgwTCOOrG…
I think before glazing chatgpt you should look beyond crappy benchmarks and out …
ytc_UgxGpZsT2…
What if the real shoggoth isn’t AI, but the human mind itself... and we’re just …
ytc_UgwEXgvAu…
Hi, That's Fu@K up and should never happen. But it did. And you can see what it …
ytc_Ugy9JbdnI…
Now that automation affects THEM it's important. Hasn't been while it affected l…
ytc_Ugx5APGAZ…
This always takes me back to that AI Codec conversation in Metal Gear Solid 2
H…
ytc_UgxJIHQdq…
I saw an autonomous car (Prius) spin out on a snowy freeway in Idaho, cross two …
ytc_UgxPWrKJn…
Thing is, what about robotics? If we're meant to have AI 'take over all our jobs…
ytc_Ugwf1-Cbl…
Comment
14:20 - "It's not putting us in danger." It is putting you in danger. Any time an automated system does something entirely different from what your experience in that circumstance says is correct, it puts you in danger by undermining your confidence and forcing you to assess whether you need to take control, from a position different to the one you would normally be in. The test for being in danger isn't injury or death; it's the potential for injury or death, and while you are in a car, that potential is already elevated above most other day-to-day circumstances. Being in a moving car, uncertain about what's going on, means the risk has increased significantly above its typical threshold. Just because you didn't die doesn't mean you weren't in danger.
27:20 - "Can you get booked for speeding?" Yes. Yes, you absolutely can, is the correct answer to this question. Why are you saying that because the Tesla won't break the speed limit, you therefore can't be booked for speeding? I can think of five or so contexts off the top of my head where a Tesla might accidentally speed, which means there are probably about ninety-five things I can't think of that would end with the same issue. Given that you, as a driver, are ultimately responsible for your vehicle even when it's in self-drive mode, there is no suggestion that you, as the responsible owner, have reduced responsibilities legally or civilly. Here's one for you: do you think the self-driving mode of the Tesla knows how to handle the new 40kph law about service vehicles and flashing lights? I think the answer to that is no, which means a Tesla might sail past an active service vehicle at almost any speed, such as 100kph. At that speed, good luck telling the cop who catches you, "oh, my Tesla didn't know about the new law and did that by itself!" You'll be lucky not to lose your license and be criminally prosecuted. I really don't understand why you would say such a silly thing. The intro to your video shows your tester Tesla repeatedly struggling to make simple lane choices to get you out of the city, but you are rock solid confident that it's never going to accidentally speed?
The warning for human interaction every once in a while is a legal requirement. It's to stop people from treating cars with auto cruise and self-driving modes as driverless and thinking they can play games, check their socials, or watch a movie. Legally, even in auto-driving mode your hands should still be physically on the wheel so that, if you need to take over, you can do so immediately; that's why it keeps prompting you to apply a little force to the wheel occasionally. Ergo, the "supervised" part of supervised self-driving. If a police officer sees someone in a self-driving-capable car, driving in regular traffic without their hands on the wheel, it's still classed as the same criminal offence as driving a non-self-driving-capable car without interacting with the steering wheel.
This is even more relevant given Tesla just recently lost a civil case regarding one of their cars killing two people while being operated as self-driving by a driver who wasn't actively paying attention to what the car was doing.
Honestly, I am surprised that the Tesla self-driving mode is allowed to be used in Australia, given that the hardware the car uses for its guidance systems is still considered far under the basic threshold for legal and safe versions of fully self-driving cars. Even in the US, where restrictions are far more lenient, the full self-driving mode of Teslas is only legally allowable in geo-locked areas: a handful of locations that have a huge amount of amassed sensory data for the cars to refer to.
youtube
2025-10-23T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxW_uYZNDumjp2AVKd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgygIqAPyMNiFo27jnp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHCQm7yDjMR7Typ0l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyJKOytV5Cmo4va6VV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugw1-n9WDapw4QTwha14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9HfFPDzSyc1Q7nkl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz578HlyyjzdUllWiB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwnX8f8uRW2J50QvMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdfAN_jGciW6KUF0Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzHfDoFGfjn3j0ZK7J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
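A raw response like the one above can be parsed and sanity-checked before it is stored as coding results. The sketch below is a hypothetical validator, not part of the actual pipeline; the allowed values per dimension are inferred from the samples on this page, and the real codebook may include other categories.

```python
import json

# Assumed codebook, reconstructed from the values visible in this dump.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def validate_records(raw: str) -> dict:
    """Parse a raw LLM response and return coded records keyed by comment ID.

    Raises ValueError if a record lacks an ID, is missing a dimension,
    or uses a value outside the (assumed) codebook.
    """
    records = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid or not cid.startswith("ytc_"):
            raise ValueError(f"bad or missing comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: invalid {dim}={rec.get(dim)!r}")
        # Keep only the coded dimensions, indexed by ID for fast lookup.
        records[cid] = {dim: rec[dim] for dim in ALLOWED}
    return records
```

Keying the result by comment ID is what would make the "look up by comment ID" view above cheap: rendering the per-comment table is then a single dictionary access, e.g. `records[cid]["emotion"]`.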