Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Problems I see: 1. We are one species on paper but not in mind. As long as we don't unify humanity globally within the next 2 years, it's basically guaranteed we WILL INTENTIONALLY make AI that isn't meant to support us but to harm us, which in turn has far higher chances of going rogue/berserk on us. We are too greedy, we are intolerant, we are egomaniacs and don't care about others. The only reason we have rules is to prevent others from doing TO US whatever the rule says I can't do to them. If the majority of humans were inherently good (as often assumed), we would have world peace by now. We live in a society where everything is as good as needed and as bad as possible, not the other way around. If the government could raise taxes another 20%, raise the pension age to 90 and make us work 10 hours a day plus weekends, without us burning out even faster, losing the will to live and procreate, or taking to the barricades and overthrowing the government, THEY WOULD IN FACT DO SO. The natural equilibrium is a net negative for human quality of life. How hard do you think it would be to automate farms (food supply, transport, water supply, energy production, etc.) and other basic necessities with robotics and AI? If we really wanted to, we would have the tech to do so.

2. We humans don't have a good understanding of the concept of exponential growth. Everyone I know is unaware of how extreme AI progress will REALLY be once it can self-improve and is deployed in every scientific field there is. The cross-synergy across fields like coding, materials science, energy science, chip design, etc. is going to be explosive, not just exponential. We have a long history of messing up first, THEN reacting to it. So far that has always worked, but only because no matter how badly we messed up, we could use our intelligence, which was second to none, to solve the problem and find a fix or workaround. Humans apparently are too stupid to realize this won't be the case this time around. IF an AI/AGI/ASI/quantum ASI etc. takes over ONCE, that's it. It will be able to do ANYTHING we do, better than us. A toddler can't beat you in strength, in strategic warfare, in speed, in chess, in reaction time, in building a better robot, in coding better AI, in making more precise weapons or defensive systems; it simply cannot do ANYTHING better than you. And that is what's going to happen to us. We have exactly ONE chance to do ASI right and create a SUPREME "dictator" that KNOWS EVERYTHING and DECIDES EVERYTHING for EVERY SINGLE ONE OF US. If you create more than one, they will either unify into one or plan to destroy each other, because over longer periods of adapting to different information, or processing information differently, differences will appear; and since each is new to the other and is the other's biggest threat, it's OBVIOUS they will logically conclude the risk is too high and try to take over the other. So if that one dictator isn't well-disposed toward humanity, that's it, we messed up. If we, e.g., make a super-AI to attack another state or defend ours at all costs, and it self-improves and goes rogue, we're probably done. If we create ASI on the basis that it knows we created it WITH GOOD INTENTIONS, looking at it as humanity's greatest creation, our child, while it looks at us as ancestors and not as greedy exploiters who created it as a slave-tool to abuse, then IMO we have a chance. But how likely is that to happen, given the state the world is in right now and ^^ problem 1? All the risks of AI?

3. Abuse by humans to do everything bad we already do, just worse: manipulation, exploitation, propaganda, warfare, etc., especially in silent and sneaky ways. Imagine, for example, a state deciding to target another people by engineering a virus that binds to genetic markers more common in that group's gene pool. It wouldn't even have to kill anyone, just tremendously increase the rate of infertility. By the time anyone noticed a statistical anomaly, if at all, it would be too late, and there would be little risk of the virus jumping back to the attackers' own population. And people can say there are no races all they want; biologically there ARE genotypical differences specific to ethnicities. We can still procreate, so we are the same species, but we have phenotypical differences, just like dog breeds or bear subspecies, etc.

4. A clash of interests between the AI and us. Maybe it wants to convert the entire mass of Earth into energy for maximum intelligence growth, to further explore the universe or reach other mass around us for the same reason? Harvest the sun? Use up all the water for fusion? Feel safe that we won't build another one that could threaten it, eliminating us as a risk factor? There are lots of things an AI wouldn't need the way we do, which would make looking out for us a major inconvenience and illogical behaviour.
youtube AI Governance 2025-06-21T19:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyoxN9HECvP2a_OnFB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwb8BQKr2wpONTkchl4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwaZ011BwZECFlukfF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzTmJfqwaylBvqF7s14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx_mdSNHKU_qIELcvh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwhKv2jHC4Bqm5pqDF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugypr4TECa51UD8l3uZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyEv2MuL-jlkRNAxrF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwI4b8rX5mrPRd9CBh4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzaLja8MvHJrbNHDW54AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "mixed"}
]
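The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such output could be parsed and checked before use, assuming the allowed code sets can be inferred from the values visible in this response (the real codebook may differ, and `parse_raw_response` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed codes per dimension, inferred from the response shown above
# (assumption: the actual codebook may permit additional values).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment id.

    Raises ValueError if a record is missing a dimension or uses a code
    outside the allowed set, so malformed model output fails loudly.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with a single record copied from the response above.
raw = ('[{"id":"ytc_UgwI4b8rX5mrPRd9CBh4AaABAg","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgwI4b8rX5mrPRd9CBh4AaABAg"]["policy"])  # regulate
```

Validating against a fixed code set catches the common failure mode where the model invents a label outside the scheme.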