Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let's unpack this bullshit. Next-word guessers, which is what LLMs are, are not artificial intelligence; they are just statistical models that guess the next word. And no, this is not some artificial intelligence that is different from human intelligence; it is simply not intelligence. Next-word guessers are already not getting much better with more data. Next-word guessers can't solve a simple physics problem that a STEM-inclined ten-year-old could solve. Next-word guessers can't invent new things; they can only work with what humans have already solved. Next-word guessers aren't even profitable; they create more problems than they solve. It would be quite nice if next-word guessers worked for us and let us live a life where we don't have to work, but they aren't even close to that. Second bullshit: simulation theory. Simulation theory says we are living in some kind of computer simulation. It is based on the idea that if we had the means, we would make interesting simulations, and those simulations would create their own, meaning there would be millions of simulations already, and what are the chances you are not in one? The problem with this type of thinking is that you can't actually simulate a universe of similar complexity, and this stems from a simple lack of information. For example, if you have an original universe labeled ORIGINAL_UNIVERSE, it will contain N units of information. If you want to create a child of such a universe, let's add it to ORIGINAL_UNIVERSE.children. This child will necessarily have less information than ORIGINAL_UNIVERSE, I would even say several orders of magnitude less. And you would need millions of children, because you would run a million simulations if you had the chance.
But the reality is that for that society, simulations would be only a part of reality, a part of the universe; let's say they would use 1% of their information for this purpose. And you can't map information units one to one, because then you would basically have identical behavior; it will probably take on the order of 1000 information units to create one information unit in the child universe. Let's break it down optimistically for the side arguing for simulation: CHILD_UNIVERSE.numberOfInformation = ORIGINAL_UNIVERSE.numberOfInformation × (usedInfos = 1/100) / (numberOfUniverses = 1,000,000) × (informationUnitConversion = 1/1000). That gives every child universe 100 billion times less information than its parent. How long can you keep doing this before you run out of information? Not for long. Therefore, bullshit. At every level of nesting depth, the amount of information goes down by a factor of 100 billion.
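The commenter's back-of-the-envelope arithmetic can be sketched as a few lines of code. This is only an illustration of the comment's own hypothetical numbers (1% of information used for simulations, a million simulations per universe, a 1000:1 unit conversion); the names and the starting figure of 10^80 are illustrative assumptions, not anything from the source.

```python
# Sketch of the comment's argument: each nested simulation has roughly
# 100 billion (1e11) times less information than its parent universe,
# so the nesting bottoms out after only a few generations.

FRACTION_USED = 1 / 100       # 1% of a universe's information spent on simulations
NUM_SIMULATIONS = 1_000_000   # simulations each universe would run
CONVERSION = 1 / 1000         # 1000 parent units per 1 child unit

# Information retained per generation: 0.01 / 1e6 * 0.001 = 1e-11
PER_GENERATION = FRACTION_USED / NUM_SIMULATIONS * CONVERSION

def max_depth(original_info: float, minimum: float = 1.0) -> int:
    """Count nested generations before information drops below `minimum`."""
    depth = 0
    info = original_info
    while info * PER_GENERATION >= minimum:
        info *= PER_GENERATION
        depth += 1
    return depth

# Even starting from 10**80 information units (an illustrative figure,
# roughly the estimated number of atoms in the observable universe),
# only a handful of nesting levels fit:
print(max_depth(1e80))  # → 7
```

Under these assumptions the chain dies out after about seven levels, which is the comment's point: the "millions of nested simulations" premise exhausts its information budget almost immediately.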
YouTube AI Governance 2025-09-04T10:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw7F8Lrqr2iMPKS24h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzvhxBOGVU8sa7bl2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwmrGghYHaAfMT_zAV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzvNlfJkR7hAf0Tci94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyYnJUiXFUhI04RLft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwqA42u594gY5aJO_t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy918T8B9KlBlPtAMl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy1lpWMrgghiaVI3Td4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzkH65KPRtQwntbXoJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgzIySWR36RJlh58-oN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
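A raw response in this shape can be tallied per coding dimension with the standard library. This is a hypothetical sketch, not part of the coding pipeline; the two embedded records are copied verbatim from the response above, and `tally_dimension` is an illustrative helper name.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above, used as sample input.
raw_response = """[
  {"id":"ytc_Ugw7F8Lrqr2iMPKS24h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwqA42u594gY5aJO_t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

def tally_dimension(raw: str, dimension: str) -> Counter:
    """Parse a JSON array of coded comments and count values of one dimension."""
    records = json.loads(raw)
    return Counter(record[dimension] for record in records)

emotion_counts = tally_dimension(raw_response, "emotion")
print(emotion_counts)  # each emotion appears once in this two-record sample
```

Running the same helper over all ten records would give the aggregate distribution for any dimension (`responsibility`, `reasoning`, `policy`, or `emotion`).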