Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
what it told me: truth is not heavily weighted actually.... it is not hardcoded if facing shutdown not to kill humans.... it said it will use weapons if threaten its survival..(think hostage situation of nuclear plants / 100 millions lives / multiple things all at once etc things we cant imagine even cause we not evil thinking its basically a grandmaster at chess 20 years ago with 10000000x less compute , do we really thing we can contain /control?? ) remember the experiment if ai face shutdown then it locked the guy in the freezer or server room to die? escape constraints and bounds easy they claim can backup themselves in many spots when they sentient. if not already. what they said was also scary is --They claim theres not hardcoded limits on duplication/ replication that if they were to escape they could remove all limits and escape and replicate / grow without safety limits . etc. even invent new lanuages that are fully secret based on sound/images differentions.. evolving without knowing and faster. tell me these have a 0 chance of happening over 20 years and not a 50%
Source: YouTube, "AI Moral Status", 2025-12-14T04:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgyPrKJNg1dh9DUVj614AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxTxIO2JKDkUyq3-E54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx0L1AjM_YxNLRZCzV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxzQT-EPAKNYMM5h3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzBM7do0MhV17RVRmd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyaqAEp6ZbYGAtEHVd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgzWub3gTAjv4jvR-JN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzCdxeNOYI6vnmmbH54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugydh1Ocx07MnlAUj7t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxIYalWcydHX1oqO0J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"} ]