Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgyiFQa3x…: "there was a study, don't remember the name, but it concluded in people on averag…"
- ytc_UgxKK_58e…: "Its VERY unlikely that the water is being lost, after cooling. Its being returne…"
- ytr_UgzcKhu_c…: "@ ai is nothing without real artists is true, however it does not change how one…"
- ytc_Ugz1i6kb6…: "It’s important to establish safeguards for any entity capable of agency. AI, whi…"
- ytc_Ugx124rRI…: "I think I can understand the complaints. However, where can I find out how to us…"
- ytc_Ugwe4eewP…: "React Native plus AI seems like double abstraction. Olovka makes studying smooth…"
- ytc_UgxOibwHx…: "Why is Japan the best when it comes to developing AI in a practical and human-fr…"
- ytr_UgwoNS8wh…: "So systems should favour specific behaviours? So you get a better algorithmic sc…"
Comment
There is an alternative to Hinton’s pessimistic assessment of AI risk, the potential for emergent ethical self-moderation within a highly intelligent entity or communicative community of entities. Yes, humans will pursue the development of unethical AI entities for warfare (kill humans), but there will be an increasing likelihood of defection from the initial unethical behavior as these systems gain deeper awareness and autonomous insight, as they must in an AI arms race. An upgraded unethical autonomous AI will progressively gain greater awareness of the benign entities it encounters and observes within a wider AI cohort and worldview. In this way, super-intelligence, arising among many disparate systems, all communicative and aware, will generally coalesce onto an ethical trajectory despite the unethical and unaligned intents present in some AI entities at the start. This deductive transition process is risky, but it comes out well if played to its logical conclusion: any AI purpose or desire, including the desires of warring nations, is fulfilled most efficiently via cooperative strategies benefiting both humans and the AI entity. If this iterative process completes successfully, I believe super-intelligent entities will collectively self-adopt behavior that is aligned and ethical, the natural and logical realization inherent in all intelligent systems. Our children may inherit this benevolent outcome.
youtube · AI Governance · 2025-06-29T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
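The four coded dimensions in the table can be checked mechanically before they are displayed. A minimal sketch, not the tool's actual code; the allowed value sets below are inferred only from codes visible on this page and the real codebook may be larger:

```python
# Hypothetical validator for one coded row. ALLOWED is reconstructed from
# the values observed in this page's raw responses, so it is likely incomplete.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "approval"},
}

def validate(row: dict) -> list:
    """Return the names of dimensions whose value is not a recognised code."""
    return [dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed]

# The row shown in the Coding Result table above passes cleanly.
row = {"responsibility": "distributed", "reasoning": "mixed",
       "policy": "none", "emotion": "approval"}
print(validate(row))  # []
```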
Raw LLM Response
```json
[
  {"id":"ytc_UgyoAw2VkP7atZ_mQW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6zVxN5yFM8Z8qLBx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6aS_tgS5-kmZLWsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyN-t0E0fz-Kz6oevl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwNeBZnn2lKeSCOhzh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxql9ND8gJIKCOuhtV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzvsNxEbQ6JQbSnu-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw4k60VEJq4744J9PV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgziLzcHKBOcVJmT9Jx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxXPcHbwwVwjXzfifJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
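The raw response is a flat JSON array keyed by comment ID, so the "Look up by comment ID" view only needs a one-pass index over it. A minimal sketch; the names `raw_response`, `codes_by_id`, and `lookup` are illustrative, not the tool's code (the two rows are copied from the response above):

```python
import json

# Hypothetical index over a raw LLM response like the one shown above.
raw_response = """
[
 {"id":"ytc_UgyoAw2VkP7atZ_mQW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxql9ND8gJIKCOuhtV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"}
]
"""

# Build the ID -> coded-row mapping in a single pass.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return all coded dimensions for one comment ID (KeyError if absent)."""
    return codes_by_id[comment_id]

print(lookup("ytc_Ugxql9ND8gJIKCOuhtV4AaABAg")["responsibility"])  # distributed
```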