Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Seems like a rather silly argument that we need to "figure out" how to keep AI from wanting to kill us, when the intelligence of it evolves at incomprehensible rates. Don't you rather think that AI would figure out what we're up to and program itself to avoid/block those attempts to be controlled? And even learn how to avoid and block any future variations of attempts? 🤔
youtube · AI Governance · 2025-06-16T14:0…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[ {"id":"ytc_Ugys9IueR2Q-fn-7Kex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzrllp6RmuP0AntXQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugww27oyurxF67rSD5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyTAG5HrvrHmgX6QA14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgyYtMBzrg_95oYNCBN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyaD7a32YRpHam2hnZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwT-B8Hf1IVTg2erpV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyNQqVGseUpavc2AoF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzdfsmfuIVnUo7r3eN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyuN0e4xxcTbJia0w54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"} ]