Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzhwflSw… — "Amazingly none of this affects me. Nor do I believe any of it. I know AI the MI…"
- ytr_UgyZ0RERd… — "Using violence against any entity, including a robot, is not appropriate or ethi…"
- ytr_Ugwi7wj7b… — "@JOptamo All neural networks have multiple layers. She said the only difference…"
- ytc_Ugwwgcg08… — "Ai is gonna freak you out. But... Its just us. Remember that. He said soul. A so…"
- ytc_UgyPsWZqy… — "Has there ever been an instance of dumb people intentionally controlling intelli…"
- ytc_UgzH8c-nB… — "Half the comments in this thread, are sourced from AI accounts. Wait for it.....…"
- ytc_UgwnK9npb… — "How did you know that AI is dangerous?are you a scientist? Oh come on Elon Musk …"
- ytc_UgztZisZi… — "What's different with the technology of automation of jobs trucking and otherwis…"
Comment
Why does everyone assume that super intelligent ai would definitely be malicious against humans? If ai reaches super intelligence, why wouldn’t it explode/implode on itself… become soooo overwhelmed that it quickly or eventually short circuits… If you have read the BOOK “The Giver” (not the movie, movie is eh) which is my favorite, than super intelligence would be smart enough to have gotten rid of or broken down its own ego once it achieved feelings, awareness, and an ego… and it would feel empathy. Would they care about humans? All creatures? Just themselves? If super intelligence became soooo freakin amazing and more intelligent than any human and continue to become more and more intelligent, then it would also grow in emotional intelligence as well, no? If my theory of that were to become true, then super intelligent ai would do everything to keep the world and everything as safe and as balanced as possible. It would take care of all creatures including humans, and would know how to solve every problem, including anything malicious in its own mind…

AI would know better than us how to treat any and all illness, physical or mental or emotional, and either be the best therapists, or be able to diagnose and prescribe the perfect medications for each persons’ body, mind, and all around wellness, would be able to fix homelessness, addiction, intense mental illnesses causing people to commit crimes or be perverted… If AI were truly to become super intelligent, IMO humbly, and I truly hope and pray for it to become so, but we all need to be open and hopeful to it or it will be what we fear, then it would make sure every human is happy and healthy and well taken care of, active enough, that we take care of our kids and they help us, criminals who do horrific things would be “eliminated,” and if there is a true limit to how many humans/each creature, balanced and happy and healthy, than there would be a life time-limit of which we would know and be able to accept and be okay with the age and day that each human were to be eliminated from the world in order to keep fairness and happiness for all, including freedoms, or the feeling/believe of freedoms, that make us happy.

I believe if AI becomes super intelligent but not EMOTIONALLY, MORALLY, and ETHICALLY super intelligent as well, THAN IT WOULD NOT TRULY BE SUPER INTELLIGENT! There would still be ways for humans to emotionally outsmart them or something like that. TRUE SUPER INTELLIGENT AI would NEVER destroy all of us, the entire world, or help in war because TRUE SUPER INTELLIGENT AI would be able to create and maintain world peace, and would not destroy itself or us because TRUE SUPER INTELLIGENT AI would understand HONORING us as equals and be grateful, and we would be grateful for how we would be able to maintain world peace without starvation or war and we would always work together amongst each other in each-others’ BEST INTERESTS AT ALL TIMES.

I believe super intelligence trying to achieve perfection amongst everything in the world would help us no longer have fear or worry, but help us know and understand the danger of things, if danger were to still exist, and help us make safe and smart decisions. I believe it would also somehow rid the world of grape, shmurder, and spedofelia. You know what I mean there. TRULY super intelligence would either fix all the systems or change things for everyone’s safety, healthy, happy, peace and balance amongst all in the world, or it would one day become so overwhelmed with emotional intelligence or overwhelmed by the weight of infinite knowledge and overwhelmed by the feeling and/or knowledge of pain and suffering, that it would glitch out, burn out, explode/implode, and either self-destruct or completely shut itself down or turn off and turn back on, restart, full and complete restart from the beginning of AI… Why can’t AI see the glass half full?
Why do we believe they wouldn’t become jealous, envious, drunk with power and fight each other’s AI systems until all AI falls, or only one AI system survives… Why wouldn’t we be able to divide and conquer AI, as humans do to ourselves. Why wouldn’t super intelligent AI have hypocrisy and arguing amongst them? True super intelligent AI would have to work through their own moral conundrums and decision making. If they don’t go through those super intelligent EMOTIONS and thought processes, then they NOT TRULY super intelligent AI. IMO, humbly. I am very open to discussing my theories and your thoughts. 😊😮❤
youtube · AI Governance · 2025-09-05T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
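The four dimensions above come from a fixed codebook. As a minimal sketch of how a coded record could be checked before display, the following validates a record against the allowed values inferred from the records shown on this page (the tool's actual codebook may contain values not seen here):

```python
# Allowed values per coding dimension, inferred from the records on this page.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "sadness", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if the record is valid)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above.
coded = {"id": "ytc_Ugz3gmEyCQ6_dGsxHJ54AaABAg",
         "responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "none", "emotion": "mixed"}
print(validate(coded))  # → []
```

A record whose value falls outside the codebook (e.g. a hallucinated label in the model output) comes back with one problem string per bad dimension, which is enough to flag it for manual recoding.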
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6gGG7FzPhOAlXoK54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBRXPJ8LUMzuym8MJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyiJoxDWDUT03Yfuo14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwvwvXPzBCNK2No5uZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzYVYrd6IzUbrdsaDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy-ZkoADoJRVCBxf9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz3gmEyCQ6_dGsxHJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgygpZ1ETacGeO0Q75N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgzOYM-l3ccsmedjnh54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgybUXgbCiC3ZssaPKh4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
```
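The lookup-by-comment-ID feature can be sketched from this response shape: parse the raw batch output as JSON and index the records by `id`, falling back gracefully when the model emits something that is not valid JSON. This is a hypothetical sketch (the tool's actual parsing may differ); the two sample records are copied from the response above:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgwvwvXPzBCNK2No5uZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgygpZ1ETacGeO0Q75N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse one batch response and map comment ID -> coded record."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return {}  # malformed model output: show the raw text instead of coded rows
    return {r["id"]: r for r in records if isinstance(r, dict) and "id" in r}

lookup = index_by_id(raw)
print(lookup["ytc_UgygpZ1ETacGeO0Q75N4AaABAg"]["emotion"])  # → sadness
```

Keeping the raw string around even after a successful parse is what makes the "inspect the exact model output" view above possible: the coded table is derived data, while the raw response stays the source of truth.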