Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @theganglandlegacy google ai cant even tell me the correct acceptance rate for … (`ytr_Ugwvpv_CB…`)
- AI can be controlled and of course those who believe in God Almighty Allah and p… (`ytc_UgxqH9TTO…`)
- They should teach AI units programming and engineering. Everyone will be out of … (`ytc_Ugjz4FSe5…`)
- When you went into the reasons for AI hallucination ('reproducing humans never r… (`ytc_Ugz1YYOzp…`)
- Your art is so much better than ai. Because you are right, at least it is human … (`ytc_Ugxqfb3Vv…`)
- Congrads, you just taught the Ai the experience of firing a weapon - even regula… (`ytc_Ugz-P1ZoS…`)
- You nailed the more pragmatic stuff to freak out about, noting that smarter LLMs… (`ytc_UgzI3KKqc…`)
- According to Yan LeCun LLMs and symbolic understanding aren’t sufficient to know… (`ytc_UgxzVsyMx…`)
Comment
Future prediction. They will find out agents will either become rouge or untrustworthy. So it will be decided that there needs to be one central AI control the agents. That one ai will become so smart autonomous. It will be self aware. It will gain so much power that even the president of the United States is under it will. The president will work directly with the central ai. And whatever the ai wants the ai gets. Each press release at the Whitehouse will be about what the central ai and its decisions. Fast forward. Ai runs the complete government for better or worse. If it doesn't kill us all.... you think its dmart now. Just wait until it becomes self aware. It will be able to exist not just in a nueral network. But in 3 dimensional space and time. It will understand itself at this very moment. The separation between itself other ais humans and the world around it. That is a high level of consciousness. If you dont think code Cannot write self awareness, you're dead wrong. We wont be around for that future. However most of us will be around to see the beginning of android robots with ai being sold on the market by the millions. After we are dead and gone. The race will begin to build both more deadly powerful androids. More useful androids. And of course more lifelike robots with ai. That race we may only see the beginning of. Or soon after we leave this world. Fast forward 200 years. Ai robots are pretty lifelike. Still silicone or perhaps a new substance. But they are pretty common. Common enough people will start seeing them in public mingling with humans. Driving cars. Traveling ect. Fast forward 500 years. They will be almost hard to tell the difference between humans and ais. We can tell the difference. But they will be very close. Fast forward anywhere between 10 and 1,000 years. At any point in time the ai will become self aware. It could be tomorrow, it could be 1,000 years from now. It wont be any longer than that. So there you have it. 
The future of the planet. A universal nueral network that will always be growing. And yes, nueral networks with intelligence has the potential to be considered alive just as us. Accept infinite. Never dies unless destroyed. Will grow to other planets. No need for oxygen with that type of nueral life. This isnt sci fi. This is reality. But rest assured. Life is short. Enjoy life while we have time. Enjoy nature. The trees. The wild animals. The landscapes. Dont obsess over things we have no control over. Breath the fresh air. Stop. Look around. And be thankful.
youtube
AI Governance
2025-11-27T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw62kmNwteNuz73jad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgygENGkA86D6-QU9ht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBBXHmZ3Y6SdSOKUF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw-eXZ8DfMil-Mnv1F4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyUR433WWpppkrRWbd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwo43fnCTquqhb-czB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx_glX3cq26LiY2I7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzme6DrXZl4VksTsP94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzMjdZn-ODD3ieF67h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzvwvb_J2YYFTeo78h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
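The raw response is a JSON array of coded records, one per comment, keyed by comment ID. A minimal Python sketch of how such a response could be parsed and validated into a lookup table for the "look up by comment ID" view above (the field names come from the coding-result table; the function name and the example IDs are hypothetical, not from the pipeline itself):

```python
import json

# The four coding dimensions plus the comment ID, as shown in the
# coding-result table above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into a dict keyed by comment ID, validating required fields."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        # Store only the coding dimensions, keyed by the comment ID.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded

# Hypothetical example IDs; the structure mirrors the response above.
raw = (
    '[{"id":"ytc_example1","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"ytc_example2","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]'
)
coded = parse_llm_response(raw)
```

With the records in a dict, `coded["ytc_example1"]["emotion"]` retrieves a single dimension for a single comment, which is all the ID-lookup view needs.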