Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At about 13:00, you start to touch on the real danger of AI. As computers become more and more powerful and is given tasks related to appropriate behavior of humans, it will, some day, become "artificially" sentient. Self-awareness and independent decision-making based on previously input biases by the humans who programmed it. This grandiose behavior is evident in this very video. Think about this. The very notion that if the United States dropped atomic bombs on a nation would make "The United States" into the most powerful nation on earth FROM THAT POINT FORWARD was a joke from the beginning. In about ten years after the bombing of Japan, other nations had the same capability as the U.S. AI will be no different than the nuclear power "delusion" which has resulted in the world being on the brink of nuclear war on a daily basis. It is most likely that we will make AI self-aware long before we are able to create programs that guide morality in AI because morality does not exist in reality today. In our attempts to teach morality to AI programs, we will give AI a "GOD COMPLEX," AND THAT WILL BE OUR UNDOING. WE WILL MAKE THE MISTAKE DUE TO OUR OWN ARROGANCE.
Source: youtube · AI Governance · 2025-07-31T17:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugx6KqZ9sJCK-GidiFV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxeKzddpnXhQmzHM0R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugylq8f8Sa7w682rBsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwRxf9RgMSiW8uMA8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzFXQKOy0EqcsZfFFp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzVQG_9IVXm4cPHmJl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzrDY0Q6FDsqzWWLr54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyVOk36EcP9_QANDbF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwiNXJzXTW0er8vmcF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwKIXlx7WOZC-Wx2rx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]