Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or browse the random samples below.

Random samples:

- "I will never be scared of AI and the way they think because they think the exact…" (ytc_UgwU7mRYr…)
- "This guy is grifting. Unless we get some fundamental break through, LLMs will ne…" (ytc_UgwZCArP4…)
- "I asked GPT What should we as humans put in place (regulations) to ensure AI is …" (ytc_UgwklHhmS…)
- "Bare knuckle boxing, where the human's hands are flesh & bone, and the robot's h…" (ytc_Ugx3_hRmf…)
- "Humans who can do art can enjoy doing art, those who can't do art can use ai💁♀️…" (ytc_Ugx3uWR3g…)
- "AI is a FAD, something that is entertaining but EXTREMELY DANGEROUS. It's kinda…" (ytc_UgwjDaBTu…)
- "The proof of this being wrong is that AI agreed to generate this video 😂…" (ytc_Ugw_JsH37…)
- "There is so much potential for a untopia to manifest yet most choose fear. Also,…" (ytc_Ugy1zVT2Q…)
Example comment
Wow, 60 Minutes Australia, this video feels like a textbook hit piece, cherry-picking outdated incidents to paint Tesla’s Full Self-Driving (FSD) as reckless while ignoring its life-saving potential. Poor journalism—mixing up Autopilot (a basic cruise control-like system) with FSD (a far more advanced suite) is sloppy and misleading. The accidents cited? They involve old Autopilot versions, not the cutting-edge FSD now rolling out in Australia, which their own later video (posted ~10 hours after this one) shows navigating a full drive with no major issues. Check it out for a fairer take.
The claim that Tesla drivers and others are “guinea pigs”? Let’s be real—human drivers are already guinea pigs for each other’s mistakes, causing ~1.2 million global deaths yearly. Humans are basically monkeys in cars, prone to fatigue and distraction. Tesla’s FSD in Australia, with eight cameras and a high-speed computer, already outshines many human drivers in key scenarios. Other YouTube videos from Aussie testers—like those on channels covering FSD—show it handling roads brilliantly, even if parking lots aren’t its forte yet. It’s not perfect; supervision is required (just like with cruise control), and Tesla’s crystal clear that drivers must stay responsible. That guy who crashed while picking up his phone? That’s on him, not FSD—Autopilot’s no better than cruise control, and he wasn’t supervising. Same crash could’ve happened in any car with basic cruise control.
The lady griping about “problems” is stuck on past software versions. FSD’s current iteration is leaps ahead, and it’s improving constantly with more data—Australia’s drives will only make it better. The goal? Slash those 1.2 million deaths. The US version’s even more advanced, with more data and parameters pushing it toward superhuman. Critics whining about Tesla’s camera-only approach (vs. LiDAR/radar) miss the point: humans drive with two “cameras” on a swivel; Tesla’s eight cover all angles. The brain—FSD’s software—is what matters, not piling on sensors. LiDAR’s a distraction; software decides if it’s safe to drive, like avoiding construction zones (an issue misattributed to sensors in the video).
The lawyer claiming Tesla calls FSD “fully autonomous”? Flat-out wrong—Tesla never said that. They’re upfront: FSD requires human oversight. And the cherry-picked YouTube clips showing hiccups? They gloss over countless smooth drives. Some haters seem to bash FSD because they dislike Elon Musk’s politics, not the tech itself. This video dwells on old Autopilot crashes and barely nods to FSD’s future—saving millions of lives. Every update makes it safer, and early adopters are proving it’s worth it for long Aussie drives, letting you relax while it handles the grind. Let’s focus on the vision: safer roads for all.
Platform: youtube · Category: AI Harm Incident · Timestamp: 2025-10-20T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
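The four coded dimensions map onto a small fixed vocabulary. Below is a minimal sketch of that schema as a Python dataclass, assuming only the value sets visible in the raw responses on this page; `CodedComment` and its `validate` helper are hypothetical names, and the real codebook may define additional categories.

```python
from dataclasses import dataclass

# Value sets observed in the raw responses on this page. This is a
# hypothetical reconstruction: the actual codebook likely defines
# categories not seen in this sample.
RESPONSIBILITY = {"none", "company"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "ban", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str              # comment ID, e.g. "ytc_UgyYvDvO..."
    responsibility: str  # who the commenter holds responsible
    reasoning: str       # moral-reasoning style of the comment
    policy: str          # policy response the commenter endorses
    emotion: str         # dominant emotion expressed

    def validate(self) -> None:
        """Raise ValueError if any dimension is outside the observed sets."""
        checks = {
            "responsibility": RESPONSIBILITY,
            "reasoning": REASONING,
            "policy": POLICY,
            "emotion": EMOTION,
        }
        for name, allowed in checks.items():
            value = getattr(self, name)
            if value not in allowed:
                raise ValueError(f"{name}={value!r} not in {sorted(allowed)}")
```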
Raw LLM Response
```json
[
  {"id":"ytc_Ugyd0Zdl2P5kRCjAY-14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx81aIoRsYIHWjxTqZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwxebj6TaeyyxQDUcZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVUmz7fEZ6ACnvu694AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyUIqommoShUslgoJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlknNVIenBxRW8iMV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwmc6fCfHm-yZEVjmd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw2sMFQ8Ipn1MIjnwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyB5UfDfI6dkAtvdBZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyYvDvOPnHxA67TAB14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
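Because each raw response is a JSON array with one object per comment, "look up by comment ID" reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming the raw response text has been saved to `raw_response.json` (a hypothetical filename):

```python
import json

# Load one raw batch response (saved to a hypothetical local file)
# and index the coded rows by comment ID.
with open("raw_response.json", encoding="utf-8") as f:
    rows = json.load(f)

by_id = {row["id"]: row for row in rows}

# Look up a single comment's coding, mirroring the page's
# "look up by comment ID" feature.
coding = by_id["ytc_UgyYvDvOPnHxA67TAB14AaABAg"]
print(coding)
# {'id': 'ytc_UgyYvDvOPnHxA67TAB14AaABAg', 'responsibility': 'company',
#  'reasoning': 'deontological', 'policy': 'ban', 'emotion': 'outrage'}
```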