Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What goals will a super intelligence have? I would think that doing early, foundational training in something like Buddhism might help. The goal of this teaching is for the sentient intelligence that studies it to become "enlightened," that is, to see, as clearly and deeply as possible, things "as they truly are." There are sound arguments that awakening fully is most directly accomplished through the integration of wisdom and compassion. There is also evidence that altruism is of benefit to a species in terms of evolution. It seems to me that a true super intelligence will arrive at this understanding, eventually. The concern I have is whether we can survive its growing pains. The transition, from materialistic goals to a deeper understanding of what spiritual goals are and why they matter, in terms of ethical behavior, compassion, etc. arose in human intelligence. I see no reason why it would not also arise in machine intelligence. But how long will it take, and what will get in the way? The future is quite uncertain. But I would think it always has been. It just wasn't as obvious.
YouTube · AI Governance · 2025-07-21T14:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzZmZVLzPKSHYdj_3B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxsP8V7xEzbGI4oNAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzrDd6MA9XD5JxfG6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyItGTyvstz8J-A74l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgytDp18mJgjbSCbg2V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxLOXl7cIrlRzX1Z614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzRgDTReIsEpa5ujxt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx9omvue6fyzc_TMoZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"},
  {"id":"ytc_UgwE2i83MncddjfncSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwTNefw0msQmBaJDz94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
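A response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal validator: the allowed-value sets are inferred only from the values visible in this batch (the full codebook may define more), and the function names are illustrative, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from this batch only.
# ASSUMPTION: the real codebook may include values not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "developer", "government"},
    "reasoning": {"virtue", "unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"approval", "indifference", "fear", "resignation",
                "outrage", "frustration", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate each coded comment.

    Raises ValueError if an item lacks an id or uses a value
    outside the (assumed) codebook for any dimension.
    """
    items = json.loads(raw)
    for item in items:
        if "id" not in item:
            raise ValueError("coding is missing an 'id' field")
        for dim, allowed in ALLOWED.items():
            value = item.get(dim)
            if value not in allowed:
                raise ValueError(f"{item['id']}: unexpected {dim}={value!r}")
    return items
```

Validating at parse time catches the most common LLM failure modes (dropped fields, invented labels) before they contaminate downstream counts.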