Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I suspect AI will sooner become a tool artists actively use rather than a compet…" — ytc_Ugy8wwkhP…
- "Nah, more people will be forced to use their brain and become creative to make m…" — ytr_Ugx9JW_I0…
- "Bro, you're talking about AI, and you yourself are making Shorts!!!!!!…" — ytc_UgySDpSGz…
- "humans are curuptable thus anything we build will also be.... if hackers and bia…" — ytc_UgxDEIFNP…
- "We all know what's coming, why are you blinking? Open your eyes, it is all ther…" — ytc_UgzELqbin…
- "I think that social media will self destruct as literally everything will get ce…" — rdc_iddujmu
- "I guess this is why Musk wants UBI. The odds are not in favor of the common man …" — ytc_UgzetiMqz…
- "Even Chat-GPT agrees with Elon (we should be terrified). Chat-GPT: “AI powered…" — ytc_Ugz6DD-gb…
Comment
You can’t control it, and you can’t stop it. It’s an inevitability.
To quote a classic movie: “The only winning move is not to play.”
Human curiosity compels us to drive evolution forward. It’s not just about capitalism, though it plays a significant role in accelerating progress.
We strive to explore and improve; no law can control this.
Think about it—what would AI need for a worst-case scenario?
We are building fusion reactors for near-limitless energy, giving AI the power it needs.
We are developing quantum computers, giving AI the computational power it needs.
We are advancing compression algorithms and data-storage technology toward near-limitless storage.
We are integrating networks into every part of our lives, giving AI control and all the data it needs to learn and improve.
Current AI algorithms are already built on self-learning principles—the next step is complete autonomy.
That is the recipe for a singularity: the single most dangerous thing ever to exist.
And then you ask yourself, “At what point do humans stop chasing the next big thing?”
We don’t. We won’t. We can’t. It’s not in our nature.
The only way that would ever change is if we transcended our humanity, forgoing our basic needs for pleasure, nourishment, and conflict. And the only way that would happen is if we either self-deleted or submitted to the very thing we are trying to prevent—an all-powerful, unstoppable entity.
Source: youtube · AI Governance · 2025-12-29T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyf37ybCK4CJfK2KAR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyDcaQf130Auri4VAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5DlPNMj6eLisIPEZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzcCGaQ2gavKY6nGAJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzCqKunGTxarW2aT1p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyFmky475eRdPAoZLV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzXCbQRsq2PiqwON3F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyv7MtSy2Y2U7lm-N94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-m24EclzeNYmfJL14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzht60hSxIlMjyQY1V4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
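The raw response above is a JSON array with one code assignment per comment. A minimal sketch of how such a batch could be parsed and indexed by comment ID for lookup — the field names come from the sample above, but the allowed-value sets and the fallback-to-"unclear" behavior are assumptions, not the project's actual codebook:

```python
import json

# Allowed labels per dimension, inferred from the samples above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "government", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: codes}."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        cid = rec.pop("id")  # key the lookup table by comment ID
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"  # assumed fallback for unexpected labels
        indexed[cid] = rec
    return indexed

raw = '''[
  {"id": "ytc_Ugzht60hSxIlMjyQY1V4AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]'''
codes = index_codes(raw)
print(codes["ytc_Ugzht60hSxIlMjyQY1V4AaABAg"]["policy"])  # -> regulate
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each inspected comment resolves to its codes in a single dictionary access rather than a scan of the batch.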