Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work in a machine shop running a 5-axis mill to make medical grade implants. For all the money they put into this amazing machine, it still has an error tolerance. The programs made on hypermill (another expensive piece of software) can have problems. And this is on a precision machine that, for all intents and purposes, runs in a completely controlled environment. Imagining a machine that has to navigate a constantly changing outside world in unknown and dynamic conditions? It's not a surprise to me that there are times when the Autopilot is more dangerous than a person driving. If anything, I'm surprised there haven't been more problems like this. I love machines, I love automation, I do not trust vehicles to drive themselves on the road with people.
Source: youtube · AI Harm Incident · 2022-09-05T04:5… · ♥ 462
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwBpAMugJFs56h7T5F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy8nG_qlMITcCzVnNN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyBlFeeAEBseE5VbuB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzRpRwK9qSerMyCerh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyR26KVaZ16mbw1Zh14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzYy20RnDHPVC2cLKd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgynoJQ1rDqEg7CfyrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxGFRO40uyLs743S5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwnbgRmoLULVYiPPPh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxUlvxor9_DHJI4YHR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
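The response above is a batch: one JSON object per comment, keyed by comment id, with the four coded dimensions. A minimal sketch of how such a response could be parsed and checked against the schema (the key names are taken from the response shown; the validation logic itself is an assumption, not part of the original pipeline):

```python
import json

# A one-record excerpt of the batch response shown above, for illustration.
raw = '''[
  {"id":"ytc_UgxGFRO40uyLs743S5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# The four coded dimensions plus the comment id, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)

# Reject any record that is missing a dimension or carries an extra key.
for rec in records:
    assert set(rec) == EXPECTED_KEYS, f"unexpected keys in {rec['id']}"

# Index by comment id so a coded result can be looked up for display.
by_id = {rec["id"]: rec for rec in records}
coded = by_id["ytc_UgxGFRO40uyLs743S5l4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

Indexing by id is what lets the single-comment view above (Responsibility: ai_itself, Emotion: fear) be pulled out of the batch response.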