Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As much as Elon fails, this also fails on the other extreme. Tesla has some very good engineers, handicapped by Elon? Yes. Starting off with a crash from 3 years ago is misleading, by a lot. With the many upgrades has come improvement. And you should listen to some people who review the changes and this past year has apparently seen a lot. And have you ever asked what LiDAR consists of? A light source with the ability to strobe, super fast; Teslas have top end lights capable of this, check; light sensors or cameras, check; a fast computer that can measure when light was emitted to when it was reflected back, fast computer? Check. The problem isn’t Tesla, the problem is no AI understands how to tell the truth, they just want to generate answers. This is a fundamental flaw. Beating that down is ambitious. Is Tesla FSD safer overall? That is the correct question, like I was telling a guy who doesn’t wear seatbelts, the fact you MIGHT be in an accident where not wearing a seatbelt is optimal, it is unlikely. Tesla seems to be like this, only big numbers can measure it, not our tiny experiences. I think the answer might already be yes, but A, we can’t take Elon’s word on this, and B we must be entitled to see the numbers for ourselves.
YouTube · AI Harm Incident · 2024-12-18T14:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyRnbOeHcCraG7CJLt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgxbyUv6IkzNIsXjeKZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxJ1fhEWSERXHPfDLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzW7kTwlB6aOjVxuN94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxRpjfrqyJ9xrInG5F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyH3U8SZm2UbqCob314AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwv9DWJci226HKmKHt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy1auMKXOYtS4AE9FN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy-FKaFihGCJVk_BiV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyX4NbPLOVGph38yVl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
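To inspect a specific comment's codes, the raw response above can be parsed as a JSON array and indexed by comment id. A minimal sketch (the `raw` string below excerpts one record from the response shown on this page; the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_UgxbyUv6IkzNIsXjeKZ4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "industry_self",
   "emotion": "approval"}
]'''

records = json.loads(raw)

# Build an id -> record lookup so any coded comment can be inspected directly.
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_UgxbyUv6IkzNIsXjeKZ4AaABAg"]
print(coded["responsibility"])  # -> company
print(coded["emotion"])         # -> approval
```

The same lookup generalizes to the full ten-record array: each dimension shown in the Coding Result table (responsibility, reasoning, policy, emotion) is a key on the matching JSON object.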