Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do think it is worth noting the difference between 1000 crashes happening with autopilot on (and how many of them could have easily been avoided had there been human intervention, and the large number of human caused accidents in the same time period. AI will only get better, if they ever add lidar then instantly better. Yet humans as a whole wont get better at driving. Not trying to take teslas side or anything, just pointing out that humans cause accidents too, and with the conjunction of humans intelligence and constant computer processing of AI that is when peak safety will probably occur. Imagine the human doing the steering and AI doing a dynamic cruise control, speeding gets cut instantly (obviously an override with the brake pedal just like normal cruise control).
youtube AI Harm Incident 2024-12-21T15:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy2_1uSyaq8TM70Cyd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyKUsP06B9SavJTUC94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_1cocp58SL-YM35h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxfAgU_hlzI1Z57KmF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzl6kUoFMNhzCv8Aqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxrY_Yx1nBCwZPngQt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyP-2BHBGnLuM8-lER4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwpSaFnZvIZDzCQlQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzw7J4GRX4g11I243p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyC49DKDQFe7J4vr3R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
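A raw response like the array above can be parsed and checked against the coding schema before the per-dimension values are used. This is a minimal sketch, not the tool's actual pipeline; the allowed values per dimension are assumed from the labels visible on this page, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the responses on
# this page (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only in-schema records.

    Each record must carry an "id" plus a valid value for every
    dimension in ALLOWED; anything else is dropped as a coding error.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

With this, the coding result shown for the comment above is just a lookup by comment id in the validated records, e.g. `rec["responsibility"]` yielding `"distributed"`.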