Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Tesla are trying to develop safe auto driving technology. This is something no one has done yet and it is likely to save many lives as it becomes better and better. It is absolutely awful that motor cyclists were killed, but to be fair how many other people were not killed as a result of autopilot? Further, at this stage in the development of the technology drivers are supposed to monitor what the vehicle is doing. Why did the driver fail to stop the vehicle? I don’t think the presenter of this video has the expertise to legitimately criticise the technological choices Tesla is making. Also no criticism of other car brand driver assistance systems. For example, I drove a BMW i3 with radar adaptive cruise control behind a slow car in front on a freeway. I turned off the freeway and the radar said, no car in front, accelerate to freeway speed! I found myself struggling to control the vehicle which was accelerating at full EV wack into the off ramp. An unsafe radar based driver assistance system! I suspect that all driver assistance systems on average help prevent accidents, but sometimes they can cause accidents. Let’s hope these AI driving systems can be improved quickly.
youtube AI Harm Incident 2022-09-25T09:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxXVkeLc73exKwsnlB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugyu0FZNCfrbF-KwDKd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzF0u64lGT56NL13LN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw5bVbyOPWJ9RZ6jLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZnoHi1HspvpD2gZh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPndd7uyLJPKqe2ER4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwMWvk4rXhKTaOH7gp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxMHxJ1BUzHabVI-k54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwqH2p6mCR22Qxf7794AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxJBGD_pPEExd3J8rN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"fear"}
]
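The raw response above is a JSON array with one record per coded comment, keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's codes looked up (the ids and field names are taken verbatim from the response above; the lookup helper itself is illustrative, not part of the original pipeline):

```python
import json

# Two records copied verbatim from the raw LLM response above,
# standing in for the full array.
raw = '''[
 {"id":"ytc_Ugw5bVbyOPWJ9RZ6jLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxXVkeLc73exKwsnlB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]'''

# Index the coding records by comment id for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the four coded dimensions for one comment.
rec = codes["ytc_Ugw5bVbyOPWJ9RZ6jLB4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → none consequentialist none approval
```

Indexing by id makes it straightforward to cross-check each displayed "Coding Result" against the exact record in the raw model output.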