Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In my opinion, AI should be programmed to choose the outcome that results in the least harm if it gets stuck in a lose-lose situation. And a self-driving car that works without any human intervention is a bad idea; I think a hybrid system where the human driver calls the shots but the AI handles the nitty-gritty details of driving would be better.
youtube AI Harm Incident 2025-02-03T04:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxfOtF47I1ZsgVZQIl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwZU9lw8m1Gp3d_grx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy9uUIfqzEMdc3jGCt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxd_U6lnVrqRTRTVNZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzo4zGIsgNQwpmHtgJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyvj_EcltAPO4vuVU14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylNjp5OBLzOvd8oRx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx3sLKS_rZVlCSdqYl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwtnHm6-WDQGfqYR-Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwBIg-OlidjNUbf47x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
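The response above can be consumed programmatically: parse the JSON array, validate each record against the label sets, and index records by comment id. The sketch below is illustrative, not part of the original pipeline; the allowed label sets are inferred from the values that happen to appear in this one response and may be incomplete.

```python
import json

# A one-record excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgwtnHm6-WDQGfqYR-Z4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Allowed labels per coding dimension. NOTE: inferred from this single
# response; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_codes(text: str) -> dict:
    """Parse the model output, keep only records whose labels are all
    in the allowed sets, and index the survivors by comment id."""
    records = json.loads(text)
    valid = {}
    for rec in records:
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid[rec["id"]] = rec
    return valid

codes = parse_codes(raw)
print(codes["ytc_UgwtnHm6-WDQGfqYR-Z4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes it cheap to look up the coding result for any single comment, which is exactly the inspection shown in this section.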