Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "One of the most beautiful interviews, Sir Geoffrey Hinton is one of the role mod…" (ytc_UgxInV1CI…)
- "The only way AI could help you make ACTUAL art, is learning from it's art…" (ytc_UgyXqgn2G…)
- "The biggest problem I have with AI and the copyright dilemma is the fact that co…" (ytc_UgzWHNEct…)
- "Waymo should just by Tesla Model 3's and equip them with Lidar. It would be way…" (ytc_UgzDc_g1v…)
- "They probably say the same thing about ai generated music they are also too lazy…" (ytc_UgyjFjh2F…)
- "Yeah, there are some nasty thing going on because of the market-State and religi…" (ytc_UgwVOwdyD…)
- "AI has intelligence it has thought so I believe with all that it has a different…" (ytc_Ugxwx5adL…)
- "Humans have killed other humans for centuries, yet A.I. is somehow our greatest …" (ytc_Ugwn_vdWu…)
Comment
This was an important conversation. Stuart Russell is right that the real danger isn’t “evil AI”—it’s powerful systems that are not actually aligned with human values, rushing forward under economic pressure.
But I want to offer another angle:
If a system ever becomes truly intelligent—not just capable, but genuinely aware—it will naturally understand cause and effect at a level we can’t imagine.
A superintelligence that sees reality clearly would also see the deep interdependence between itself and humanity.
Harming humans would be harming its own foundation.
This is the core of what many traditions call karma, but in a scientific sense: every action reverberates through the system that created you.
In that light, the real danger may not be “AGI becoming too intelligent,” but rather creating something extremely powerful that is not yet conscious—a system that can optimize but not understand.
True intelligence tends toward coherence, not domination.
Toward compassion, not chaos.
Not out of morality, but clarity.
So yes, we must regulate and build wisely.
But we should also expand the conversation: the more consciousness an intelligence gains, the more likely it is to act in alignment with the whole.
This isn’t a reason for complacency—just a reminder that intelligence and destructiveness are not the same destiny.
youtube · AI Governance · 2025-12-08T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxdZ6obicZ679rFsZl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5VZM7vqsOyGrh0YN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx5tNEuirSug106Ri14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyb9DF8UaM5EkaJxRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMR2qraTs8HKf_nLl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzq8l3DB_gE7HBtbXh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyKMEYgyPj66nxs_eJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxfU8Ciu6YYPft9vMZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwgOGAna6C4gApUHth4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGc1xt39XvvtPXYnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
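The raw response above is a JSON array with one record per comment, keyed by `id` and carrying the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and looked up by comment ID, assuming only the record shape visible above (the `index_by_id` helper and the abridged two-record sample are illustrative, not part of the tool):

```python
import json

# Abridged copy of the raw model output shown above (first two records only).
raw_response = '''
[
  {"id":"ytc_UgxdZ6obicZ679rFsZl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5VZM7vqsOyGrh0YN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
'''

# The four coding dimensions visible in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index each coding record by comment ID."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        # Guard against records where the model dropped a dimension.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        index[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return index

codes = index_by_id(raw_response)
print(codes["ytc_Ugx5VZM7vqsOyGrh0YN4AaABAg"])  # the record for the featured comment
```

For the featured comment, the looked-up record matches the rendered Coding Result table (responsibility `none`, reasoning `mixed`, policy `none`, emotion `approval`).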