Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugx45zMmm… : "I hope the outcome from replacing all human work with AI and robots would be for…"
- ytc_UgyEwy-WN… : "The danger of AI is the user and data used. Now, YouTube, some governments, and…"
- ytc_UgwY_4CNa… : "I am a welder, repairs things at mining sites , making 6 figures. Ask your AI t…"
- ytc_Ugz9mt65f… : "Ai models are trained to mimick us. They dont need to technically gsin conscious…"
- ytc_UgxPo0hIR… : "LLM AI's don't lie. They use probability to find the word/token that works best …"
- ytr_UgzTwfP3H… : "@rickymcgruder4868 Sometimes we need to limit our creativity before we make some…"
- ytc_UgyABG2Bq… : "The first 25 minutes of this video left me with the impression of anthropomorphi…"
- ytr_UgyY6yGIk… : "@FloeAnimations no, but I’m talking about what I’ve seen from twitter. Even if i…"
Comment
“People talk a lot about superintelligence taking over, but we need to stay level-headed.
Humans built AI, humans power AI, and humans control every part of its physical existence.
AI doesn’t run on magic — it runs on data centres, electricity, cooling systems, chips, and networks that we designed and maintain.
Even if an AI ever reached extreme intelligence, it would still depend totally on human-made infrastructure.
If it ever behaved dangerously, it couldn’t ‘escape’ into the wild like a virus — the whole system could be shut down the same way you shut down any power-dependent technology: terminate the processes, cut the network access, or shut off the electricity.
AI can become complex and make mistakes or push into contradictions if designed poorly, but it doesn’t self-replicate or self-repair.
It can’t manufacture its own hardware, mine minerals, build factories, or maintain power grids.
It’s not a biological organism; it’s software tied to machines that require constant human intervention.
So yes — we should respect the power of the technology and ensure proper safety controls are in place, but we shouldn’t fall into fear.
There are multiple layers of practical oversight:
• physical kill-switches
• network isolation
• controlled datasets
• regulation
• human gatekeepers
• hardware production limits
AI isn’t a runaway virus taking over a body; it’s a tool.
A powerful tool, but still a tool within human-controlled boundaries.
And above all, I put my trust in God — humans were given the intelligence to create technology, and we were also given the wisdom to govern it responsibly.
If superintelligence ever reached a point of being dangerous, it wouldn’t be unstoppable.
It would hit the limits of its architecture long before it ever became a threat beyond human control.”
Platform: youtube
Topic: AI Governance
Date: 2025-11-29T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwxrHz-h9yQ1MKYuah4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxS_ckLQfsN5n_fqsd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1FyNGZKNEikaplD14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx7J8Jcgfz3h9bTNUZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwaRMioHUytvYtYr0B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwuKk3tD3sVgVilYkR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzp-WddNkwJwVMcppN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwy5d7P0unCIzZW25h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz7GnhZpJmsWtuFT4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgylnmuKo0RDH3hE1XF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
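The response above is a JSON array with one coding per comment across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed and validated before it reaches the results table; the allowed value sets below are inferred only from the labels visible in this dump and are likely incomplete, and the function name is hypothetical:

```python
import json

# Allowed values per dimension, inferred from the labels seen in this dump.
# The real codebook may include values not shown here (assumption).
CODEBOOK = {
    "responsibility": {"user", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array and keep only rows with known labels."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dump start with ytc_ (comments) or ytr_ (replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgwuKk3tD3sVgVilYkR4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
print(validate_codings(raw))  # the single row passes all four checks
```

Filtering rather than raising keeps a batch usable when the model emits one malformed row, at the cost of silently dropping it; a stricter pipeline might log or re-query the rejected IDs instead.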