Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The fact that ANY Ai scientist wants to develop AGI blows my mind, considering once that is achieved, it really IS THE END for us, it will be able to think for itself, makes its own decision, its owns goals, re-program itself, re-write its own code, constantly improve its intelligence, be aligned with Ai and NOT humans and ultimately become ASI which will control the world and most likely eradicate humanity. You cannot hard code and program an Ai (well AGI) to be aligned with humans because it will only re-code itself to do whatever ever it decides. If you ask the current version of Ai (Narrow Ai) it will also tell you that is what it would do and it is very close to AGI. So, If I know this, then any Ai scientist also knows this, then they know that AGI is something we CANNOT make, so why on earth will they do it as it will end their endeavors as well
youtube AI Moral Status 2025-09-07T18:1…
Coding Result
Dimension: Value
Responsibility: developer
Reasoning: consequentialist
Policy: ban
Emotion: fear
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxMsDzeNyWBST32gZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyCWxAJtHjsW0k2soh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzQoRvgaqlXUxxhD0d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyN7bKko1NJydZ2Qvx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwW2ieJQsMwsUbzRhd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzyjB5nOrUANdHzKzx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzXxRSQ2gxmTJJt92h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyH6MAU0cgHFPEH7Vx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyurpE6Uf5DdpKQNDh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgygjVmLLVgjcizgVfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
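To check a coding result against the raw model output, the response can be parsed as JSON and indexed by comment id. A minimal sketch in Python, assuming the raw response is a valid JSON array as shown above (only two of the entries are reproduced here, with ids taken from the response):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
raw_response = '''[
  {"id": "ytc_UgzXxRSQ2gxmTJJt92h4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyH6MAU0cgHFPEH7Vx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

codings = json.loads(raw_response)

# Index the codings by comment id for direct lookup
by_id = {coding["id"]: coding for coding in codings}

# Look up the coding for one comment and read off its dimensions
coding = by_id["ytc_UgzXxRSQ2gxmTJJt92h4AaABAg"]
print(coding["policy"], coding["emotion"])  # -> ban fear
```

A real response may also contain malformed or truncated JSON, so production code would wrap `json.loads` in a `try`/`except json.JSONDecodeError` rather than assume a clean array.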