Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is tragic yes, but honestly let's consider the pretty small number of incidents that have occurred with autonomous cars over the last couple of years during this testing phase, which each time gets noticeable media attention and paints the technology in bad light. Now compare it to how many people get killed by a car driven by your normal, everyday fleshy, water-sack of a human (A 2015 statistic puts it at 5,376 pedestrians killed in the U.S. for that year alone. That's an average of 15 people per day!) not to mention all those that get hit, but survive. Now, yes there's not very many autonomous vehicles on the road yet, but it should go without saying that humans are every bit as fallible as computers relying on sensory data, if not more so. And its not like these systems aren't being continuously improved upon. You can design better sensors and programming to account for new variables; you can't tell people "Go get better eyes or faster reaction times". And just as an example, this can easily be compared to the advent of commercial airliners. The first ones rolled out had a really bad track record. Don't beiieve me, look up the "de Havilland Comet". But, here we are today with a massive airline industry, transporting over a billion people each year. Point is, Driveless cars are gonna happen, and their gonna happen soon. People need to accept it and understand that like every innovation, there's a learning curve involved.
youtube AI Harm Incident 2018-03-23T00:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyBCRn2IMzSnm26o6t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw_YLyqmyO8jrFbq3t4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxTNDuFrvpXaO2J7K54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyT9erG4D0vG72BgEl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzE0Q-sD4etuwQMMod4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxJYz7fniZehiT1Klp4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwYDdnL4FjzfKW8QDp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzfEAy_t5LjQcK1nB14AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyVpmKUOg-LEZ1EzMt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyCnkacnYHmMddOB454AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
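The raw response is a JSON array of per-comment codings keyed by comment id. A minimal Python sketch of the lookup (the id and field names below are taken verbatim from the batch above; only the single matching entry is reproduced):

```python
import json

# Excerpt of the raw LLM response: one coding object per comment id.
raw_response = """
[
  {"id": "ytc_UgyT9erG4D0vG72BgEl4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]
"""

# Index the batch by comment id, then pull the coding for the
# comment shown in this section.
codings = {row["id"]: row for row in json.loads(raw_response)}
coding = codings["ytc_UgyT9erG4D0vG72BgEl4AaABAg"]

print(coding["emotion"])  # → resignation
```

This is how the Coding Result table above is derived: the dimensions (responsibility, reasoning, policy, emotion) are read straight off the matching object in the model's JSON output.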