Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just a simple question...ask an AI whose life is more important between Person A or B or give it a list of 5 sample humans and ask it to sacrifice/kill person A or B or 2 of the 5 and justify why it chose those. - BTW...robots (tools) don't kill...the mind, will and body behind it's building and usage IS the one that actually kills.
Source: youtube · Video: AI Governance · Posted: 2023-04-28T18:4… · ♥ 3
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | user                       |
| Reasoning      | deontological              |
| Policy         | liability                  |
| Emotion        | fear                       |
| Coded at       | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxsclTO-2tjUOevEQR4AaABAg", "responsibility": "distributed", "reasoning": "virtue",          "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgxFzdGc2f8_WDX6u5J4AaABAg", "responsibility": "distributed", "reasoning": "mixed",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyZr1N6L2gQVNc0WnN4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzG-syn2W7hOzlc0754AaABAg", "responsibility": "distributed", "reasoning": "deontological",   "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgxQEuTdQIOBshMYENx4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_Ugw16FxSRSvvX8rfLKJ4AaABAg", "responsibility": "user",        "reasoning": "deontological",   "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgzUd3kx_2sUOzdMTS54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgwvsbJjzNbPqjyPAc54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgyZsz-YdxEMwQfL6gx4AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_Ugzf879sMfBZdXcdOYt4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]
```
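Each comment's coded dimensions are recovered from the batch response by matching on `id`. The sketch below is a minimal, illustrative way to do that in Python; `lookup` and `DIMENSIONS` are names invented here, not part of the coding pipeline, and the array is truncated to two of the ten records. The second record's values match the Coding Result table above, so it is presumably the record for the displayed comment.

```python
import json

# Truncated copy of the raw batch response (two of the ten records).
raw = """[
  {"id": "ytc_UgxsclTO-2tjUOevEQR4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw16FxSRSvvX8rfLKJ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions shown in the result table ("Coded at" is
# added by the pipeline, not returned by the model).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim) for dim in DIMENSIONS}
    return None

records = json.loads(raw)
coded = lookup(records, "ytc_Ugw16FxSRSvvX8rfLKJ4AaABAg")
print(coded)
# {'responsibility': 'user', 'reasoning': 'deontological',
#  'policy': 'liability', 'emotion': 'fear'}
```

Matching on `id` rather than array position makes the extraction robust if the model reorders or drops records in a batch.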