Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It's funny how the dude in the video posts two pencil drawings of himself and hi…" (ytc_UgxWumwVS…)
- "What people don't understand about this is that you are surrendering your abilit…" (ytc_UgxyRNOcD…)
- "Why in gods name would you give a robot any kind of gun..... well it was nice kn…" (ytc_UgzvUgKoX…)
- "They just keep regurgitating the same headline, no basis or examples of the jobs…" (ytc_UgzS7LexN…)
- "So good to hear some experts calling for sanity. Some people have practically lo…" (ytc_Ugx1g1sq6…)
- "Generative "AI" shouldn't be used at all (not even "as a joke") because corrupt …" (ytc_UgwdnyPo-…)
- "Tbf, its not chatGPT, its the lawyer that too dumb to even do the work. chatGPT …" (ytc_UgyCDHFXU…)
- "I'm not in the art community but THE ART COMMUNITY CAN'T DIE BC OF AI!? ART IS …" (ytc_Ugx4eGCng…)
Comment
Yuval is right: Human Alignment is necessary for AI Alignment.
So far in human history we've "learned" to trust by establishing laws to uphold contracts and persecute crimes.
But as we all know, laws are rigid and slow to adapt. They fail to solve human trust entirely.
Our market economies either incentivize us to work by offering corrupting power or they impose quotas and expect results without incentives.
What we need is a revolution in the way we cooperate.
No money. No currency. No trade.
We need a new reward system, invented with technology.
We need to hijack our brains just like social media, to make us addicted to cooperation and work without power as reward and merely social recognition.
Source: youtube · AI Governance · 2025-07-22T13:0… · ♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwBmIud28l_qtSRqT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJ-0xOEbWLoLhSZFV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzL9HzXDKb9ha0i6Rl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgztrpcDLVlnizzWtht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7vzxbFJBL1M3BlBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6M_WEJeFM6BYGLfh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxdfryuQSOLLmxPnfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgznzNf6vSFBqlh3F4R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFPPFg-dMfhv7Z8cd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwoE1RjmODEQJwnjZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
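The lookup shown above (raw batch JSON → one comment's coded dimensions) can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation; the `ALLOWED` value sets are inferred only from the codes visible in this batch, and the real codebook may define more.

```python
import json

# A slice of the raw batch response, copied from the panel above
# (two rows shown here for brevity).
raw = """
[
  {"id": "ytc_UgzL9HzXDKb9ha0i6Rl4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwBmIud28l_qtSRqT94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

# Value sets inferred from this batch's output only (assumption, not the
# full codebook).
ALLOWED = {
    "responsibility": {"none", "unclear", "distributed", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "deontological"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "fear", "mixed", "outrage", "indifference", "resignation"},
}

def lookup(rows, comment_id):
    """Return the coding row for one comment ID, or None if absent."""
    return next((r for r in rows if r["id"] == comment_id), None)

def invalid_dimensions(row):
    """List any dimension whose value falls outside the inferred code sets."""
    return [dim for dim, values in ALLOWED.items() if row.get(dim) not in values]

rows = json.loads(raw)
row = lookup(rows, "ytc_UgzL9HzXDKb9ha0i6Rl4AaABAg")
print(row["policy"])            # regulate
print(invalid_dimensions(row))  # []
```

Validating each batch against the code sets this way catches the common failure mode where the model invents an off-codebook label mid-batch.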