Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This 60 Minutes segment is pure legacy media fearmongering. No stats, no context, just accidents cherry-picked to scare you. They skip that Tesla Autopilot crashes every 6.69M miles, 10x safer than the average car's 702K miles. Human errors cause 90% of 1.19M yearly road deaths. Why no mention of other driver-assist systems or the 7,500 non-Tesla car fires annually? At 2:55, the supervisor lets FSD enter a no-entry zone, but no harm done. It's supervised, and Tesla improves from disengagements. The clips at 4:00? Old beta FSD, not today's v13.2.9. Dr. Cummings says self-driving cars don't reason? Wrong. FSD calculates 36 times a second based on training, outdoing most human drivers. Blaming cameras? Waymo's lidar and radar setups fail too; software's the issue, fixable with updates. The 2019 crash at 9:45 was a distracted driver on basic Autopilot, not FSD, yet they blur the two. Tesla should upgrade Autopilot, but this piece is crafted to spook buyers away from tech that saves lives, all for ad revenue. Is 60 Minutes unbiased? Check the expert's agenda. Got FSD experience, or just buying the fear? Tesla's sold 9M cars without ads, Model Y's the top seller, and legacy auto's mad. Support innovation, not scare tactics.
youtube AI Harm Incident 2025-10-20T04:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugyd0Zdl2P5kRCjAY-14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx81aIoRsYIHWjxTqZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwxebj6TaeyyxQDUcZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwVUmz7fEZ6ACnvu694AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyUIqommoShUslgoJN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwlknNVIenBxRW8iMV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwmc6fCfHm-yZEVjmd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw2sMFQ8Ipn1MIjnwh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyB5UfDfI6dkAtvdBZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyYvDvOPnHxA67TAB14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
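The raw response above is a JSON array with one object per comment id. A minimal sketch of parsing and validating such a response in Python, assuming the label vocabularies can be inferred from the entries shown here (the real codebook may allow additional values, and the `parse_codings` helper is hypothetical, not part of the pipeline):

```python
import json

# Allowed labels per coding dimension. ASSUMPTION: inferred only from
# the responses visible above; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "ban"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping
    entries with a missing id or an out-of-vocabulary label."""
    codings = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue  # skip malformed entries without an id
        if all(entry.get(dim) in labels for dim, labels in ALLOWED.items()):
            codings[cid] = {dim: entry[dim] for dim in ALLOWED}
    return codings

# Usage with a single-entry response in the same shape as above:
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":'
       '"consequentialist","policy":"none","emotion":"outrage"}]')
print(parse_codings(raw)["ytc_x"]["emotion"])  # outrage
```

Validating against a fixed vocabulary is worth doing here because LLM coders occasionally emit labels outside the codebook; silently accepting them would corrupt downstream tallies such as the per-dimension summary table above.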