Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If all of the people on that road were using autonomous cars, they would all maintain the safety distance between them and be able to brake simultaneously, one after another, because the distance between every car is kept smaller. By doing so, the risk of an incident would be reduced to almost 0%. This is admittedly an easy explanation of how the entire situation should work; because what if there are only around 10 to 35 autonomous cars on the road and all the others aren't, and what if those who aren't using autonomous cars are driving too close to other cars? Unfortunately, accidents aren't things that can be predicted, but we might be able to develop a system evolved enough to take the percentage down as far as possible. We should indeed have the power to do so, but maybe not the money; we'll see what the future has for us in terms of cars. I guess that autonomous cars will be taking over in 5-6 years.
youtube AI Harm Incident 2018-01-09T12:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzmTWvkx_16kZxRipB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwU3JKTd7DXea630ch4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5Al8HpdWeQXVA14V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzR-I1zg7Fd24xkO2B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTRsoOWGCfnwZIeM14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyLGAxBlRBrU6Q9HzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzGRQqzWI3R8ooBq5t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzTIXhunuFErqKh5k54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypFlEyseWL_KumX5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMlxjfJsg85RHvnPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
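A raw response like the one above can be turned into per-comment codes with a small parsing step. The sketch below is a hypothetical illustration, not part of the actual pipeline; the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) and the `id` field come from the dump itself, but the sets of allowed values are only inferred from the values visible here and the real codebook may define more.

```python
import json

# Allowed code values per dimension, inferred from the dump above.
# Assumption: the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Map comment id -> coded dimensions, rejecting unknown code values."""
    codes = {}
    for record in json.loads(raw):
        cid = record["id"]
        dims = {}
        for dim, allowed in ALLOWED.items():
            value = record[dim]
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            dims[dim] = value
        codes[cid] = dims
    return codes

# Usage with one record from the raw response above:
raw = ('[{"id":"ytc_UgzR-I1zg7Fd24xkO2B4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_codes(raw)["ytc_UgzR-I1zg7Fd24xkO2B4AaABAg"]["emotion"])  # approval
```

Validating against the codebook at parse time catches the common failure mode where the model emits a value outside the schema, rather than letting it silently enter the coded dataset.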