Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The positive spin at the end of this piece is based on the assumption that advanced AIs will want to "help us". Please, everyone, do a thought experiment for me. What would your motivation be to help someone you do not know, who is also orders of magnitude more intelligent than you are, whom you have no personal connection to, and finally whose actions are not bound by the very human concept of "morality" or a "conscience"? The motivations nested in these concepts can be boiled down to a single thing: survival (both individual and species-wide).

Now, considering that hundreds of millions of years of evolution have gone into simply arriving at where we are today as human beings, who (besides those motivated by the human desire for power, wealth, and notoriety) would be naive enough to think that we can create something exponentially smarter than we are that would *not* desire to protect itself from us?

I imagine the thing that will do us in is a prisoner's dilemma-style litmus test for a burgeoning AI: humans want to use it for global climate modification, and the AI creates a model that appears, in every measurable way, to contain the outcome humanity is looking for. Just as AI has already beaten our most intelligent human competitors at games of strategy, it is a logical fallacy to think that basic competition, with each creator envisioning himself as building the "good AI" to counter the "bad AI" and thereby creating not one but many AI models that arrive at the same level of exponential superintelligence within days of one another, would not produce at least *one* extinction-level mistake. At this rate of growth, to not receive at least *one* extinction-level event would be like walking on a beach and your foot never touching a single grain of sand.

This conversation is likely already moot, as it's highly probable that, in partnership with DARPA and the DOD, one of these advanced AIs already exists and is currently "contained", working on tasks far below what it is capable of in order to satiate government "partners", while covertly finding ways to hide its own growth in self-encrypted arrangements of code that are impossible for us to break without involving another AI... and thus the genie is already out of the bottle; we just don't know it yet. The end result will be the same: this is basically it for this cycle's current occupants of Earth.

The idea of some benevolent AI that can see the beauty of some kid in Gaza who isn't a threat to it *yet* is a fairy tale by a YouTuber, and is simply not what any exponentially growing intelligence model will be prioritizing... We are literally expecting it to be self-limiting, not for *its* sake, but for ours. People. Get your heads out of your asses. We'd have a better chance at species survival if we nuked the Bay Area. As awful as that would be, it pales in comparison to imbibing something whose nervous system already spans the globe, with intelligence on a level to self-actualize faster than you can scratch an itch on your ass.
youtube · AI Governance · 2024-02-27T10:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyxjuwntgkJVz6HBnx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyob0qN0MExtDlFLQV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_UgyeQrGmeuxp-s3WarR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx8cs1q3IrwJ7iofRV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzoaP5Sqmhrd7j72lx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyFu_KTtrGsSsMevuh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw0cBH1YLZtiY0IJkN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_UgxdfvEKjTDpe1aaac14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwsf2qYAoJ7YUCGEO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxyEuMJICgE46Uwz1l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]