Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't disagree that there is a problem, but it's worth pointing out that nearly every clip you used of an AV doing dangerous maneuvers was a Tesla. Tesla's are widely regarded as among the worst systems available for autonomous driving. Their cars are legitimately less safe and a lot of people in the industry dislike them because they really will just use people like guinea pigs. The same thing is generally not true of the others, though there are of course instances of crashes, it's impossible for that not to be the case, even if driven perfectly.

It also seems a little silly to just entirely disregard their safety data. Yes company data can't always be trusted, but pretending there is no value in it at all is simply not true and quite handwavy, and it's simply not even true that the company data is the only data so just look at other sources if you don't believe it at all. Saying that Waymo is bad because they "use lasers to scan your town" is also a little hand wavy and comes across as fear-mongering and disingenuous.

I'm not saying that we need to defend the megacorporations or anything, but AV does have the ability to make roads safer and more efficient for everyone. While deaths aren't an acceptable tradeoff for progress, it also just doesn't really happen (except for Teslas locking people inside, don't buy a Tesla). Generally, Waymos are legitimately already safer than human drivers. The cars drive very regularly across many large cities almost entirely without issue. The issues they usually get (though very rare) are minor annoyances, like driving in circles or when they would all go back to their home base and cause internal traffic.
youtube 2026-02-11T16:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugyzb-896IknvLKXd0Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwTNrQ5vvypk0xxoiJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzIuOlJ6BMj2qMw0rJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwq8lBAqxYa4dV4paB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgyNUOUlecBZNzTSGdx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy0Y6EQ_KzGMeUQZAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyEA9ONgqIAYH061YJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxvrHSxKVjppgFk2jh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwuw5yjyWRpL1lro694AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgytH4OWu-84YEPmfjR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"} ]