Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
This example uses human error to portray a problem with a self-driving car: driving too close to the vehicle in front of it. A self-driving car would be programmed to keep enough distance and drive at a safe speed so that it has enough time to brake when an object suddenly stops. If you replace this example with a person suddenly jumping in front of the car, the outcome is still the same, because the car will react much faster than a person in that position and it can look at multiple points at once. Safety on the road would increase even more with the evolution of the self-driving car. I would imagine that at one point it would be possible for the car to track moving objects in its surroundings and estimate where those objects are going, and/or all self-driving vehicles will be linked together to warn other vehicles of objects moving in their paths.
Source: YouTube · AI Harm Incident · 2015-12-18T11:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugi_GtOLGK5NvngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UginTf9w9kAfAngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UggPMgtotN-UAHgCoAEC","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgggUF_2Qb4HPHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UggYd0QeaEjoT3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgjFNMbehN7Mp3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiV6VqZ6rqWDngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgisBoqYfIeKMHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiL9TsF2gC4CngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgiMNk6vckjqtHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]