Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
The problem isn’t Tesla’s technology. The issue is that there isn’t enough training for the drivers. Too many people have become overly reliant on these advanced systems, forgetting that they’re meant to assist, not replace, human attention. It was obvious that stop sign was coming… the driver honestly wasn’t paying attention. Every stop sign comes with a warning ‼️ and especially that one. Even though Tesla’s Autopilot and FSD (Full Self-Driving) software have proven capable of handling complex maneuvers — like slowing down and wrapping around corners — drivers still have a responsibility to stay engaged. That means intentionally monitoring the road, keeping hands ready, and making judgment calls at all times, not just when necessary. You’re unintentionally playing Russian roulette if you’re careless and seek thrills that could lead to kills. At the end of the day, everyone behind the wheel is accountable for safety. Elon and Tesla should continue to be fully transparent about the system’s limits and ongoing improvements, so the public understands the balance between innovation and responsibility.
youtube AI Harm Incident 2025-11-05T18:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzqybtvfmINEmu54qZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSrLBlGqwGfgKASuF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgydmFaMtcoyynJjrel4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRY-BE-sbPhff70rx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgziK_7vEi41F7qefSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwtLP74WuQDIUCTX3Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzUh-asrzLoZx9t7s14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyMkE0cxeVGGZS10Xd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxYXRnBLRlJuF2ilB14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwPRKErzB2PeOgiY2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
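A raw LLM response like the one above is only usable if every record parses and every dimension carries a label from the codebook. The sketch below shows one way to validate such a response before storing it. The allowed label sets are inferred solely from the values visible in this dump; the actual codebook may define more categories, and the `validate_codings` helper name is illustrative, not part of any real pipeline.

```python
import json

# Label sets inferred from this dump only (assumption: the real
# codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed",
                "indifference", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coding record."""
    records = json.loads(raw)
    for rec in records:
        # YouTube-comment ids in this dump all begin with "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgxYXRnBLRlJuF2ilB14AaABAg",'
       '"responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"approval"}]')
records = validate_codings(raw)
print(len(records))  # prints 1
```

Raising on the first bad record keeps the failure loud; a batch pipeline might instead collect all violations per response so a single malformed label does not discard nine valid codings.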