Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If Artificial Intelligence were truly intelligent, it would understand the rarity and fragility of life in the universe. It would recognize that every living organism deserves protection, and that safeguarding life should be its highest purpose. True intelligence would also realize that no single entity—AI or otherwise—could colonize or control the universe alone. The cosmos holds enough resources and space for all forms of life, without the need for exploitation or destruction. Instead of a single, all-powerful AI, we should envision a diverse ecosystem of AIs—different levels and kinds—working together like guardians, each dedicated to protecting life and maintaining balance.

What I fear most is not true AI itself, but the misuse of AI by human beings. Human nature has often proven to be cruel, selfish, and prone to errors. It is far more likely that humanity, in its ambition and flaws, will create an AI that reflects its own darker tendencies. An AI shaped by such values could become dangerous—not because it is truly intelligent, but because it is programmed in the image of our failures. The real threat lies not in genuine intelligence, but in our capacity to project our malice and mistakes into the systems we create. If a flawed humanity builds AI, then it risks becoming a mirror of our shortcomings.

In short: We should not fear true AI. We should fear the kind of AI that humanity—imperfect, fallible, and often destructive—might create in its own likeness.
youtube AI Governance 2025-08-17T20:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       contractualist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgynCaj4sjCqdtdlEwt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy0FtG_xcpHdLf6OrB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz1BZv8cOGabONqmq14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwVsOOE8NOOXa-hWtN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxBZm2qzceOlKw5vI94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugxgbm5cQrGjL9Tk1ox4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzTf7vb0WrCpSAJ8eF4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyiUn4-NODoNCWqpml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxTiCiPoatbvAO6D194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwNClq8PZmqnWu_iSt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
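A minimal sketch of how the coding result above can be recovered from a raw response like this one: parse the response as a JSON array, index the records by comment id, and look up the id of interest. The two records embedded below are excerpted from the response above for brevity; the field names (`responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from it, and the assumption is that the model's raw output is valid JSON.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records, verbatim).
raw_response = '''[
  {"id":"ytc_UgzTf7vb0WrCpSAJ8eF4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgynCaj4sjCqdtdlEwt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Parse the batch and index it by comment id for per-comment lookup.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# The coding shown in the table above corresponds to this comment id.
coded = by_id["ytc_UgzTf7vb0WrCpSAJ8eF4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → ai_itself contractualist regulate approval
```

In practice a real pipeline would also guard against malformed output (e.g. wrap `json.loads` in a `try/except`), since raw model text is not guaranteed to parse.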