Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the biggest and only problem is that they understand 0 and 1 means either they do a thing or not do that thing . if AI evolve itself (offcourse it will that's why we call AI) and if situation like to wipe out whole human species then either they don't or they do , there's no in between case like human . I am unable to explain more here but human should remember "when to stop " 👽👍
youtube AI Moral Status 2021-11-19T18:2… ♥ 1
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz9lD_wPl2uREFbhuR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwnKW9uuZsDMQg2Lux4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgznST5hKNyF-hqV_hd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwwB6Bo_Uh4DBAANLV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwBIZ62711CtI5CY4B4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwtW1_mIdrGETKnIlF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyhdCUByBfDXB1HkTJ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxG3b0Mx-evH6BBAyB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxaZLJRDW3IKi30qvN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgygQL0JY3jpkjdt1kR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
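A raw batch response like the one above should be parsed and validated before the codes reach the results table. The sketch below is one minimal way to do that, assuming the allowed values are exactly those that appear in this report (the real coding schema may define more); the function name and the `ytc_` id prefix check are illustrative, not part of any documented tool.

```python
import json

# Allowed values per dimension, inferred from the values seen in this
# report; the full coding schema may include additional labels (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM batch response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop records without a YouTube-comment id of the expected form.
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Keep a record only if every dimension holds an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugz9lD_wPl2uREFbhuR4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(validate_codings(raw)))  # 1
```

Silently dropping malformed records is one design choice; for auditing a coding run it may be preferable to log rejects instead, so coder drift or schema violations stay visible.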