Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "50 bucks to someone who can explain to me how ready-made art is okay but AI art …" (ytc_UgynJmqAn…)
- "So, if I understand correctly, middle-aged people adapt while the young…" (ytc_UgwUUSEGN…)
- "Question... Who's going to build all these - how many - robots? The rich folks …" (ytc_UgzSBiT1r…)
- "Really AI is so bad right now... its downright impossible to even search for a r…" (ytc_UgzBXbJfZ…)
- "It's so upsetting to see how much progress ai generative art and irl based ai vi…" (ytc_UgwJfnpP-…)
- "Imagine everything gets automated, and everyone lose jobs, man that's a situatio…" (ytc_UgyRCCHpf…)
- "Fascinating. Here's an issue. If the board fires everyone and eventually somethi…" (ytc_UgzdfAllU…)
- "Human are m0dr0n.. / Who are create A.I.. / Greedy human is far far far more danger…" (ytc_UgzwGgCbQ…)
Comment
What goals will a super intelligence have? I would think that doing early, foundational training in something like Buddhism might help. The goal of this teaching is for the sentient intelligence that studies it to become "enlightened," that is, to see, as clearly and deeply as possible, things "as they truly are." There are sound arguments that awakening fully is most directly accomplished through the integration of wisdom and compassion. There is also evidence that altruism is of benefit to a species in terms of evolution. It seems to me that a true super intelligence will arrive at this understanding, eventually. The concern I have is whether we can survive its growing pains. The transition, from materialistic goals to a deeper understanding of what spiritual goals are and why they matter, in terms of ethical behavior, compassion, etc. arose in human intelligence. I see no reason why it would not also arise in machine intelligence. But how long will it take, and what will get in the way? The future is quite uncertain. But I would think it always has been. It just wasn't as obvious.
youtube · AI Governance · 2025-07-21T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzZmZVLzPKSHYdj_3B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxsP8V7xEzbGI4oNAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzrDd6MA9XD5JxfG6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyItGTyvstz8J-A74l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgytDp18mJgjbSCbg2V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxLOXl7cIrlRzX1Z614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzRgDTReIsEpa5ujxt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9omvue6fyzc_TMoZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"frustration"},
{"id":"ytc_UgwE2i83MncddjfncSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwTNefw0msQmBaJDz94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
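The "look up by comment ID" view above can be sketched as a small parser over the raw LLM response: load the JSON array, validate that each row carries the four coding dimensions plus an `id`, and index the rows by comment ID. This is a minimal illustration, not the tool's actual implementation; the `index_codings` helper is hypothetical, and `RAW_RESPONSE` is an abbreviated copy of the first and last rows of the response shown above.

```python
import json

# Abbreviated copy of the raw LLM response above (first and last rows only).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzZmZVLzPKSHYdj_3B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwTNefw0msQmBaJDz94AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
"""

# Field names taken from the response shown above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and return {comment_id: coding dict}.

    Raises ValueError if any row is missing an expected field, so malformed
    model output fails loudly instead of silently dropping dimensions.
    """
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing keys: {missing}")
        indexed[row["id"]] = {k: row[k] for k in EXPECTED_KEYS if k != "id"}
    return indexed

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgzZmZVLzPKSHYdj_3B4AaABAg"]["emotion"])  # approval
```

Indexing by ID makes the lookup O(1) per comment, which matters once a batch response covers thousands of coded comments rather than the ten shown here.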