Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
“People talk a lot about superintelligence taking over, but we need to stay level-headed. Humans built AI, humans power AI, and humans control every part of its physical existence. AI doesn’t run on magic — it runs on data centres, electricity, cooling systems, chips, and networks that we designed and maintain. Even if an AI ever reached extreme intelligence, it would still depend totally on human-made infrastructure. If it ever behaved dangerously, it couldn’t ‘escape’ into the wild like a virus — the whole system could be shut down the same way you shut down any power-dependent technology: terminate the processes, cut the network access, or shut off the electricity.

AI can become complex and make mistakes or push into contradictions if designed poorly, but it doesn’t self-replicate or self-repair. It can’t manufacture its own hardware, mine minerals, build factories, or maintain power grids. It’s not a biological organism; it’s software tied to machines that require constant human intervention. So yes — we should respect the power of the technology and ensure proper safety controls are in place, but we shouldn’t fall into fear. There are multiple layers of practical oversight:

• physical kill-switches
• network isolation
• controlled datasets
• regulation
• human gatekeepers
• hardware production limits

AI isn’t a runaway virus taking over a body; it’s a tool. A powerful tool, but still a tool within human-controlled boundaries. And above all, I put my trust in God — humans were given the intelligence to create technology, and we were also given the wisdom to govern it responsibly. If superintelligence ever reached a point of being dangerous, it wouldn’t be unstoppable. It would hit the limits of its architecture long before it ever became a threat beyond human control.”
Source: youtube · AI Governance · 2025-11-29T23:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
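
Each coded row follows a small fixed schema: four categorical dimensions plus a timestamp. A minimal sketch in Python of that record, assuming the label sets are exactly those observed in the raw response below (the real codebook may allow more values, and the CodingResult name is illustrative, not part of the pipeline):

from dataclasses import dataclass

# Label sets observed in the raw LLM response below; the full
# codebook may contain more values (assumption).
RESPONSIBILITY = {"user", "developer", "company", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "ban", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "resignation", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, as in the table above

    def validate(self) -> None:
        # Reject any label that falls outside the observed sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")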
Raw LLM Response
[
  {"id": "ytc_UgwxrHz-h9yQ1MKYuah4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxS_ckLQfsN5n_fqsd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw1FyNGZKNEikaplD14AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx7J8Jcgfz3h9bTNUZ4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwaRMioHUytvYtYr0B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwuKk3tD3sVgVilYkR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzp-WddNkwJwVMcppN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwy5d7P0unCIzZW25h4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz7GnhZpJmsWtuFT4Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgylnmuKo0RDH3hE1XF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
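
The model codes comments in batches, so the raw response is a single JSON array and the row shown in the Coding Result table has to be recovered by comment id. A minimal sketch of that lookup, assuming the response parses as valid JSON (the extract_coding helper is illustrative, not part of the pipeline, and raw below is abridged from the full response above):

import json

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the record for one comment id."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    raise KeyError(f"no coding found for comment {comment_id!r}")

# Abridged copy of the response above, kept to the one record that
# matches the comment shown on this page.
raw = '[{"id": "ytc_UgwuKk3tD3sVgVilYkR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}]'

# Returns the labels shown in the Coding Result table:
# user / consequentialist / none / resignation.
coding = extract_coding(raw, "ytc_UgwuKk3tD3sVgVilYkR4AaABAg")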