Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I feel this argument with natural selection and possible misalignment is a bit of a dead end. It still assumes that there is benevolent training and goal setting, but something goes wrong somewhere. What about the scenario that someone actively trains an AI on malevolent goals? Pretty much like somebody would run amok today and randomly shoot people, somebody could instruct an AI to do catastrophic harm. And it would act in perfect alignment and attempt to reach its goals with the side effect of catastrophic damage to humanity (/ a particular country / a particular person / a particular company / ...).
YouTube · AI Governance · 2025-10-15T14:4…
Coding Result
Responsibility: user
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw020LS5heBPqkmljh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugyxzm2tBFOUzhEmaOB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzGk_HeUExutKl7cH14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz_cBrS56ehAj5JJWF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzYHvjd6N-ZMYg2Aw54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyJTjwXSKOp62hMybJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyQKbLJu4dbiNsUeeR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw20JWf1bwQ6F0L5Q54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwe1saDyf4vOv1A35Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx3s1S-MN4X0swLOkt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
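A minimal sketch of how a raw response like the one above can be parsed and matched back to a single coded comment. Variable names are illustrative, and only two of the ten records are reproduced here for brevity; the record shown is the one that yields the Coding Result for the comment above.

```python
import json

# Two records from the raw LLM response above (the full array has ten).
raw_response = '''
[
  {"id": "ytc_Ugw020LS5heBPqkmljh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugyxzm2tBFOUzhEmaOB4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

# Parse the JSON array and index the records by comment id for lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the coding for the comment quoted above.
coding = records["ytc_Ugyxzm2tBFOUzhEmaOB4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → user consequentialist fear
```

Batching many comments into one JSON array, as the raw response does, means a single malformed record can break `json.loads` for the whole batch, so a production pipeline would typically validate and fall back per record.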