Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In this example, the car should be fixed ASAP because it failed to avoid the collision in the first place. If designers ask themselves whether it should collide with a motorcyclist or an SUV, they are doing it WRONG. Big airplanes are mostly automated, and engineers never had to ask themselves this kind of silly question; instead, the air transportation system as a whole was redesigned (and still is!). If an Airbus engineer seriously asked "In case of runway incursion, should the plane crash in a field or collide with the faulty craft?", I'm pretty sure he'd be fired right away! If the system leads to a moral dilemma, then you should not try to solve the dilemma, but fix the system that led to it. The air transportation system is not perfect, but it has very important features for safety: - In terms of safety, *everyone* involved is fully responsible: not 0%, not 75%, not even 99.9%, but 100% responsible. The "Who's at fault? And in what proportion?" part is left to the insurers, who are not part of the transportation or safety field but the financial one. Thus how they determine responsibility is irrelevant to improving safety. - The air transportation system is designed and redesigned as a whole: no matter which part of the "machinery" failed, it must be looked at as a failure of the whole system. - Independent investigations are run in order to fix the system's shortcomings. The way road safety rules work is outdated. Experienced motorcyclists get that at some point (unless they die or end up crippled before that, or are extremely lucky) and develop the same kind of rule of thought. E.g., if you pass a green light and get hit by an out-of-nowhere moron who ran the red light, it's not your fault (once again, irrelevant: leave that to the insurers... if the outcome is you being dead, you're no longer able to care whether it was your fault or not), but you are 100% responsible because you failed to expect morons to run red lights!
When safety is at stake you cannot and shouldn't trust stone-cold rules. Computers can be far more paranoid than the most paranoid of motorcyclists (in intensity and consistency); they can evolve after each accident in any part of the world; sensors can detect and keep track of much more; etc. Thus each second spent trying to resolve pointless moral dilemmas is a second less spent thinking of ways technology and methodology can be used to avoid the catastrophic outcomes in the first place. THIS is the real moral issue.
youtube AI Harm Incident 2017-01-29T15:4…
Coding Result
Dimension        Value
---------------  ----------------
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          outrage

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UggAC0mV8oC9jngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjpcS32Uc2yJngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggfHBar3vbNengCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiwvgjYZIffAngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgjxDEIZXTjr23gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghcPoA1NFGlengCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgitDjAIO4MRV3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjY90-a_EZ8FHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugiew_Ebk3iMfngCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiX854HF1O3sHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
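The raw response is a JSON array with one record per coded comment, keyed by comment id, with one label per coding dimension. A minimal sketch of how such a response might be parsed and validated before storing the codings — the allowed label sets below are assumptions inferred from the values visible above, not the tool's actual codebook:

```python
import json

# Allowed labels per dimension (inferred from the examples above;
# the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping
    records with a missing id or an out-of-codebook label."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        coding = {dim: rec.get(dim) for dim in ALLOWED}
        if all(coding[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[cid] = coding
    return out

raw = ('[{"id":"ytc_UgiwvgjYZIffAngCoAEC","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_UgiwvgjYZIffAngCoAEC"]["responsibility"])  # developer
```

Validating against a fixed label set like this catches the common failure mode where the model invents a label outside the codebook; such records are skipped rather than stored.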