Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Mr. Hinton argues that we are just physical beings; his materialist framework reduces consciousness and reasoning to the brain. I believe the issue with this mindset is that he talked a lot about the what’s of AI: what it can do, what our relationship to it is going to be, and what impacts it will have on the existence of humanity. But why? I would argue that “what’s” should be informed by “why’s.” AI could do those things, but why would it? This is where I would argue that humans are different from other creatures on earth: humans, unlike animals, seek purpose and not just survival. I do not claim a complete understanding of the full nature of human beings, but I believe observations of humanity and nature can lead one to conclude that human beings are not just physical; there is something metaphysical as well. My argument is that humans are physical beings, we are conscious, and we are also spiritual. I hold to Christian beliefs, but even remote villages and people groups that have little to no contact with the modern world hold spiritual beliefs, though they vary. It is my argument that purpose is ultimately connected to the divine. I understand people have different beliefs, but for Mr. Hinton to assert that AI can be very dangerous and could potentially take over the world assumes that there is a reason to do so. If people are also machines, then we can be replicated. But ask why: machines just do “what’s,” and “what’s” are informed by “why’s.” I’m curious to hear perspectives. Why would AI take over the world?
youtube AI Governance 2025-06-16T18:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzV8uyqcJUCBg-5wZJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwGWXo8pl9odkFc43h4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx1c1SpdmnoYtkTyZB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyiFLA1H_lFgijUhfh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQhidX7357DoCWgHZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzh4ijQszdir6A7rZR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwRAQ27JDSP_NlKeF54AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxOzmdM7KecQJ6EiRR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxWHKIIhUPnv-yaB4p4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzJr49U3-NaAeF2-xJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
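The raw model response is a JSON array of per-comment codings, so an individual comment's dimensions can be recovered by keying on its id. A minimal sketch in Python (the variable name raw_response is an assumption, and only one entry from the array above is reproduced for brevity):

```python
import json

# raw_response is assumed to hold the model's JSON array verbatim;
# only the entry for the comment shown above is included here.
raw_response = """[
  {"id": "ytc_UgxOzmdM7KecQJ6EiRR4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Look up the coding for the displayed comment.
coding = codings["ytc_UgxOzmdM7KecQJ6EiRR4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # mixed mixed
```

This lookup reproduces the Dimension/Value table above (responsibility none, reasoning mixed, policy none, emotion mixed) directly from the raw response.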