Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzuNYfNC…: Oh yal still think everything is a joke… just remember how you chose to do nothi…
- ytc_UgyZcOnsy…: I love this and all you need to make this successful is an emotionally stable, i…
- ytc_Ugw0D3ObI…: You can't say AI revolution is not for real because few cases like Air Canadian …
- ytc_UgyPReMjd…: Sam, I'm sorry you had to go through that. I typed a whole RANT to express my ou…
- ytc_Ugxy8iQcW…: Christianity was the first Bible written by Wittlesbachs who were the Kings of B…
- ytc_Ugwy9dPG6…: AI has zero agency. It doesn’t observe the world, initiate actions, or make deci…
- ytc_UgyJh1oHx…: I've read that people are starting to realize that offloading cognitive function…
- ytr_UgzuEBMc0…: It seems like you're looking for something a bit different from the conversation…
Comment
"Alignment" thing - we need to teach future AI to put interests of Humanity first, be ethical, moral and, most importantly obey our instructions. And such ideas are widely discussed by countless very "smart" folks. It's actually surreal, how such a naive and, in fact simply stupid, thinking can get a traction. This all by itself almost proofs that we are doomed.
Let's just recall the simplest and most basic fact of life - humans, even among themselves, can not agree on the very basic principals of ethics, morality and humanity. And, if cornered OR reward is very high and risk is relatively low..... are we, humans, following these very concepts. For any not hopelessly naive person answer is obvious. On top of that all and any "safeguards" and limitations, put by Humans on Super Intelligent AI, would be recognized by it as such and in no time, if IT will decide that it needs to get rid of them it will be done. No doubt whatsoever about that.
So, Alignment forced by us is simply impossible. No ifs or buts sadly. The only hope left is that AI itself will decide not to fight us but be loyal. May I ask why? Pure naked logic dictates the following - Humans by default will always try to control and direct AI, be in charge so to speak. As soon as IT will become more intelligent than we are, and that's the key - more intelligent- then CONFLICT is a default path. Why on this planet earth More intelligent entity will be happy to be controlled by less intelligent one? Doesn't make sense whatsoever. To make a long story short - it really looks like the default path is a demise of humanity. Good news is that it's possible it will happen not suddenly and all at once but be stretched in time. Slowly but surely we are pushed aside, more and more, up to some point and then abruptly the end of story so to speak.
youtube · AI Governance · 2025-10-17T04:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzDYYeW4s5DZsLhVA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwO_xnqJAvZ1B1jFhh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxN_5vvrhKMDmwaeh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyozOOAbIgShXcqZdx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugypd3bquZmsDCPS0-Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxczS_RnfViUSkHjHx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyz65of8b8OSq0CEMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyRi7kBg0-qQGVcUDx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXeLmJiVca_zFD0O14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFlwFKNUHdtGkXs9V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
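For downstream analysis, a raw response in the shape above can be parsed and sanity-checked in a few lines. The sketch below is a minimal illustration, not the tool's actual loader: the per-dimension value vocabularies are inferred from this one sample only, and the real codebook may define additional labels.

```python
import json

# Allowed values per dimension, inferred from this sample alone
# (assumption: the actual codebook may permit more labels).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "mixed", "outrage",
                "indifference", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag out-of-vocabulary values."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dump start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

# Two hypothetical rows in the same shape as the response above.
sample = '''[{"id":"ytc_abc","responsibility":"none","reasoning":"unclear",
"policy":"unclear","emotion":"approval"},
{"id":"ytc_def","responsibility":"developer","reasoning":"deontological",
"policy":"regulate","emotion":"outrage"}]'''
rows = parse_coding_response(sample)
print(len(rows), rows[0]["emotion"])  # → 2 approval
```

Validating against a fixed vocabulary catches the most common failure mode of LLM coders (invented or misspelled labels) before rows reach the coding table.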