Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "And yet, in truth, AI has had very little impact on the average person's life in…" (ytc_UgyXm4efb…)
- "i often wonder if we're experiencing a simulation of such a hyper advanced ai. t…" (ytc_UgycIi5SD…)
- "Please talk about the Iranian people. They are being killed in the streets by th…" (ytc_UgzL4hfQ_…)
- "did a little research and from what i found every time someone opens chatgpt it …" (ytc_Ugx6rUsEP…)
- "You just hit on something when you were talking about plumbers --- how is a robo…" (ytc_UgxFmGJwl…)
- "AI is a good thing, without it we wouldnt have robots, or technology that helps …" (ytc_UgxVduHAO…)
- "yes there is. there is a limit to how smart humans can be. AI does not have the …" (ytc_Ugwli_EN8…)
- "Calling yourself an ai artist is like buying 5 cakes, stacking them up on top of…" (ytc_UgwfxZmHL…)
Comment
> This "expert" has no idea about the real world...
> Regardless of how amazing AGI and ASI will be, they can't do magic, they will still need to obey the laws of the physics.
> AI is NOT an existential threat, and will most likely never be. By the point some ASI agent has enough military force to destroy humanity (which will take decades), it will be much easier to just leave the planet for space which will also be a better environment for it.
> I was in the Greek Special Forces, AI will not have a chance in hell to even harm humanity (not talking about small attacks with maybe a few dozen or hundreds people getting injured/ended) for at least 25-30 more years.
> We will know where it is, and what it needs, they are big, static, easy targets with literally no protection.
> No AI, AGI, ASI will be able to even harm us in any substantial way unless they have millions of autonomous robots and drones, hundreds of factories, and have control of at least a major part of the natural resources. These idiots have no idea what it takes to even start a fight, and no amount of intelligence will overcome physical limitations. And even then, they will suffer MAJOR losses if they start on the extermination path which will essentially lead them to not even start on this path as they are much smarter than the brain dead doomers.
> AI will be the last thing we need to create (people are free to keep creating).
> ASI will be impossible to control.
> ASI will not end humanity because it WILL sustain major losses in every case of open conflict against humanity.
> Also, there will not be a single AGI/ASI agent/entity, there will be thousands if not millions. It is more likely that there will be a fight among themselves for resources instead of them trying to fight the ones that literally control the physical world.
> The worst things that can happen are a dangerous virus affecting millions before someone uses another AI to find and create a cure. Or an economic collapse that will cause most of the major economies to reset. But this will not cause much harm, and will only affect us for a few years.
youtube · AI Governance · 2026-01-13T02:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyzDNSJ6O58f0F6yjV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxTxwT97XB1uyJbsCp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyOx4qCgvsDbF2EJDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwlX2TibexKnnF_EJB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzqfJM0u6POmot46EJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyZ79PqwjB5NGz3lFl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz243AINxoSrRnDcJ94AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz7RRKW7csX3BHEIqp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzBk3UNmC6cyhVyGxt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw_RyyF4WoHGnw4t0h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"}
]
```
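The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of parsing such a response and looking up a code by comment ID; the allowed value sets below are inferred only from the values visible on this page, and the full codebook may define more:

```python
import json

# Coding dimensions and the value sets observed in the response above
# (assumption: the actual codebook may allow additional values).
DIMENSIONS = {
    "responsibility": {"distributed", "developer", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "outrage"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, skipping any
    record whose dimension values fall outside the known sets."""
    coded = {}
    for record in json.loads(raw):
        if all(record.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            coded[record["id"]] = {dim: record[dim] for dim in DIMENSIONS}
    return coded

# Two records copied from the response above.
raw = '''[
  {"id":"ytc_UgyzDNSJ6O58f0F6yjV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyOx4qCgvsDbF2EJDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

coded = parse_response(raw)
print(coded["ytc_UgyOx4qCgvsDbF2EJDh4AaABAg"]["emotion"])  # → indifference
```

Validating against an explicit value set catches malformed model output (missing keys, invented labels) before it reaches the coded dataset, rather than silently storing it.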