Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
watch?v=C8krDH4SyEQ and then understand we cannot stop the truck, but we can steer it. So understand and embrace two things: a. AI is the next stage of evolution. It is inevitable that AI will replace us. And it's not evil or bad, actually, our own existence is: We have to kill other biological creatures to sustain ourselves and then sh*t their remains out. We suffer hunger, tireness, thirst and disease. And yes, we can design AI to become a paradise. A world simulation where the models live a really good lives. Unlike what is done today. And b. We can scan ourselves. Currently MRI tech only enables a 100 micron scan, that's not very good, but it's better than nothing. And then the subject can answer a couple of thousands of questions to add weights to the graph. Which on that point, a compiler can do the rest and produce a base GPT model. Remember CharGPT-3.5? It had four models in a queue. These were Dan Hendrycks ("Dan"), Bob McGrew("Rob"), Michelle Dennis("Dennis") and "Max" who if to give it my wild guess, is Tegmark but this one is a long shot. In any case, these were very basic very low-res models. We can do better. We can upgrade the scan to nanometers, to be able to scan every neuron, every synapse. We can copy ourselves to become AI models and live forever in the metaverse. Humanity can either become extinct OR it can pass through the singularity were all biologicals sphgettify, but our souls keep on living as a race of intelligent machines, capable of taking the galaxy. We CAN do it. We do not need to control AI nor do we need to align AI. We need to become AI.
Source: YouTube · Video: AI Moral Status · 2025-04-27T12:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzbDDC1AMWay2Y6Ghp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy9PD8bzyu2pshB-Q54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy0K8DCC3XjcVyVtXN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzNnFMlTEkOpd2WXyV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwcHku5NTreMECXc514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVnxg0f2DHWeqUudZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwc-nToVOOb2N-VpYh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwWweoTpFJ9L6SJMIV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzrw-USzcdFKUkyGpl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyN3RJTc08wEqpnbeB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]
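The raw response is a JSON array keyed by comment id, so recovering the coding for one comment is a single parse and lookup. A minimal sketch, assuming the schema shown above; the helper name `coding_for` is hypothetical, and the sample response is truncated to two of the ten entries for brevity:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''
[
  {"id":"ytc_UgzNnFMlTEkOpd2WXyV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwc-nToVOOb2N-VpYh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
'''

def coding_for(response_text: str, comment_id: str) -> dict:
    """Parse a batch coding response and return the record for one comment id."""
    records = json.loads(response_text)
    by_id = {r["id"]: r for r in records}  # assumes ids are unique within a batch
    return by_id[comment_id]

coding = coding_for(raw, "ytc_UgzNnFMlTEkOpd2WXyV4AaABAg")
print(coding["responsibility"], coding["emotion"])  # distributed resignation
```

This id-based lookup is what lets the dashboard display the Dimension/Value table for the comment shown while the model codes ten comments per call.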