Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think it's a matter of "if" but rather "when." Computer developers and researchers saw the basics of some of these dangers 50 years ago. As computer research and development progressed this same community has gradually come to understand the real possibility of creating a self-aware AI. I submit we have arrived at the point in time where humanity's collective hubris has become an existential blind-spot we need to address. Even though I'm possibly a decade or two away from my end of life, I am deeply concerned for those of us with a much longer time left to live who will have to navigate the coming uncertain times. Using the phrase "letting the genie out of the bottle" is an understatement because the vast majority of humanity doesn't see this coming and it will be too late when they do develop their own personal understanding / interpretation of what biological life will have become in relation to and because of the effects of a self conscious AI. This warning needs to be taken seriously by everyone especially including all governments that regulate how we interact with each other. However, I have every expectation it will not be taken seriously and so it really doesn't matter because it is human nature to disobey which means for all intents and purposes we can move this recommendation forward under the assumption a self conscious AI will be created. It's just a matter of time. How much time is anybody's guess but what matters now is what humanity will do when it happens and can't be reversed. This is humanity's ultimate choice and only a few of us know it. Buckle up people because if you were concerned with how society has developed during your lifetime so far, your level of concern will be even more triggered in the coming days whether or not the computer research community heeds this concern.
youtube AI Governance 2023-03-30T17:5… ♥ 5
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwX1xYnIHjHTx9XDyZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYwY5t9BS3npIMR-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzCX5jx9mry1tknUG94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzBWVL8bQXTItp50kV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxZvMWxavv5ve7WFuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxR1mryg_4K9SUkUGZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy5yMG25in0rar6Hg94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz0ls2EJ26blnHcQD94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyYuERQswURjCQ6MGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxGhWY3iwyrAAINR1N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]