Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I love Penrose, and I think he is brilliant, and even though I agree with his technical definition of "intelligence", I most certainly disagree over AI being unable to become conscious. I think these statements are based on the limited data of actualized current technology. We know that at this stage in development, AI becoming truly sentient or aware would be accidental rather than intentional. We don't have the scope in understanding consciousness or projecting technology to make an absolutist claim such as that. One can imagine the potential matrix of hardware/wetware/software evolution that might achieve such complex and organic thought as the human mind. Quantum computational methods to mimic the synaptic activity in an organic brain. The organoid evolution to grow like a human brain. The integration of multitudes of AI platforms into one AI entity. We have limitless opportunities to create what we design, and I have yet to hear anyone explain exactly how technology will evolve, to the point where they can assert definitively that AI can never become conscious.
youtube AI Moral Status 2025-05-08T18:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwy6K31YktADf2YCXx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzwsIy_AG56tBSDOW94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxq-MHp4fDD_mmbDed4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzWK_0YCyhmJvivjK14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYjS4XgevAfs89pft4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzEa1LfMMLEN7x6dXV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyaTup6-JqZ8IgkESZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgweYzH9xHDWlQxIslx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxgxBZzSYA4HFz9MN54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxzFbwZRF-o2NC1chh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
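The raw response is a JSON array with one object per coded comment, keyed by comment id, with one field per coding dimension (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and indexed for lookup; the `parse_codes` helper is hypothetical, not part of any tool shown here, and the sample uses one real entry from the response above:

```python
import json

# One entry copied verbatim from the raw LLM response above;
# a real response would contain one object per coded comment.
raw = '''[
  {"id": "ytc_UgzwsIy_AG56tBSDOW94AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]'''

def parse_codes(raw_response: str) -> dict:
    """Index the model's codes by comment id for easy lookup."""
    return {row["id"]: row for row in json.loads(raw_response)}

codes = parse_codes(raw)
# Look up the coded dimensions for a single comment.
print(codes["ytc_UgzwsIy_AG56tBSDOW94AaABAg"]["reasoning"])  # mixed
```

Indexing by id makes it straightforward to cross-check the model's codes against the per-comment result table shown above.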