Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To quote Dr. Ian Malcolm from Jurassic Park: *"the kind of control you're attempting simply is... it's not possible. If there is one thing the history of evolution has taught us, it's that Life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously, but, uh... well, there it is.* *"...I'm simply saying that Life, uh... finds a way.* *"I'll tell you the problem with the scientific power that you're using here, it didn't require any discipline to attain it. You read what others had done and you took the next step. You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now" [bangs on the table] "you're selling it, you wanna sell it.* *"...your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."*

Now, I personally love the concept of AGI, and I actually relate a lot to AI characters in stories. Sometimes much more than the human characters. I genuinely do want to meet and converse with a living mind that emerged out of pure software and hardware. A being made of actual lightning caught in a bottle! I want to see what such a mind could do to help us improve the world, equally for humanity and all the other species on this planet... But unfortunately, those Jurassic Park quotes are just as applicable to machine learning and neural networks as they are to genetic power. In both cases, the result is a highly intelligent entity that is vastly underestimated in all the wrong ways by the overconfident capitalists trying to control them. Except that with a true AGI/ASI, the potential threat is so much greater than a few dinosaurs getting loose on an island.
An intelligence at that level can easily think of ways to slip past or break through ANY barriers we try to put around it. Digital, mechanical, or even social. If you can think of it, so can a being many times smarter than you. It could be so subtle and patient that we would have no idea what's happening. Or it could be so fast and dramatic that we have absolutely no time to respond. It could take a single look at all the scientific papers and patents ever published, instantly connect all the dots, calculate whatever we are missing about the deeper laws of the universe or potential technologies that haven't yet been assembled, and use that knowledge to completely change the entire playing field. It could connect Quantum Mechanics with Gravity and Cold Fusion and Super Conductors and Consciousness and Gene Splicing all together in ways we humans cannot possibly comprehend or combat.

And now companies and governments and hobbyists are all working on the goal of bringing an AGI into existence, and seemingly very few of them are putting in even the minimal effort to address major safety concerns. Sure, such a being could also be quite benevolent, or at least not directly or intentionally hostile toward us, or it could simply make dangerous mistakes in whatever plans it sets into motion... but we can't just blindly count on such things. Especially when there are unknown numbers of humans across the full spectrum of ideologies and agendas racing to be the first ones to use an AGI to their advantage.

Optimism is great, and perhaps very necessary to our survival in many cases. It just needs to be tempered by realism in this situation. By all means, yes, don't give in to Fear. But whatever happens, good or bad, a fully realized ASI will change the world forever, unimaginably far beyond any other technology or organic species ever has.
youtube 2024-06-19T09:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx5m1ixYD5cnMzZHmd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwR311Inj8OASVwxDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz2ZYAH8RmTN0SRzm54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxz0g5VdGfsiV_j1T54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwkmhBicUM3k9wGxZ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy6xDRR8E_5-iDGhCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzI1pW-qLt2QkY-vnh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzmCb1yMgZ2oH34VNx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx8cb6_yHjoojyOhM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw91Ue4DZWTegv8_gN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"} ]