Raw LLM Responses
Inspect the exact model output behind any coded comment, either by looking a record up directly by its comment ID or by browsing random samples.
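The same lookup can be reproduced offline. Below is a minimal sketch in Python, assuming the stored raw responses live in a hypothetical `raw_responses.jsonl` file with one batch (a JSON array like the one shown at the bottom of this page) per line; the file name and layout are assumptions, not the tool's actual storage format.

```python
import json

def build_index(path: str) -> dict[str, dict]:
    """Map each comment ID to its coded record across all stored batches.

    Assumes each line of the file is one raw batch response: a JSON array
    of records, each carrying an "id" field (hypothetical storage layout).
    """
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            for record in json.loads(line):  # one batch = one JSON array
                index[record["id"]] = record
    return index

# Usage: look up one coded comment by its ID.
index = build_index("raw_responses.jsonl")
print(index.get("ytc_Ugz8K2KpYHeeDwi7xud4AaABAg"))
```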
Random samples:

- ytc_UgyX11GP9… — "AI is simply the Dewey decimal system for the library of Babel of human made mea…"
- ytc_UgwNrlDqS… — "Isn't the real problem how much fresh water is needed to cool down whenever an a…"
- ytc_UgzJXpYNA… — "If Bradley Cooper committed a crime i would be named as the suspect through faci…"
- ytr_Ugw7TLCIg… — "I equate AI to having an intern work with you. They can do simple tasks, but oft…"
- ytc_UgxwhejMn… — "for me, i kinda like it for the same reasons why i like other artists. now don't…"
- ytc_Ugw3nrib_… — "My grandma used to tell me how to make CHATGPT angry to help me sleep.…"
- ytc_UgysnAWK1… — "EU chat control would mean we're already at a dystopian level. If you don't know…"
- ytc_UgxYGTlM-… — "This guy just made it to the list, first one to go if AI ever takes over.. 💀…"
Comment
Individual humans are almost 100% failures at selecting the correct criteria for 'human improvement'. Humans as a group have managed the nearly divine task of temporarily beating natural selection, but only through cooperative effort. But what a win it is.
Can we build a machine which "correctly" selects the right criteria, when we ourselves are bad at selecting that criteria? This is the *alignment* problem in a nutshell. If individual humans build the next AI, it will likely optimize for those behaviors which those specific humans think are useful. But we don't really know (as individuals) what those are.
Should AI avoid killing humans? If you say yes, then the AI is unprepared for self-defense or wartime conditions.
In some ways, current AI "thinking" is a funhouse mirror of our own collective thought processes.
Platform: youtube · Topic: AI Governance · Posted: 2025-11-23T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
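Each of the four dimensions draws from a closed set of labels. A minimal validation sketch follows, with the allowed values inferred only from the labels visible on this page; the real codebook may define additional categories.

```python
# Allowed labels per dimension, inferred from the sample output below;
# the actual codebook may include categories not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [
        dim for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]
```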
Raw LLM Response
```json
[
{"id":"ytc_UgxI2ReNVcMCU_GyZOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwAFwPAthNINAqHcOV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8K2KpYHeeDwi7xud4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXF8A8_gVuWh8jAWF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHKYSGl8BdC5UmybB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwjaqsi0VCDGcWqgkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzxzLXN6LBLvvWsx8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwJLpap7OLOiLS1ExJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw5jMY0T1v7oHjlcG14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_3pHl5edmptXGec54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
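Raw model output is not guaranteed to be clean JSON, so a defensive parse is worth having. A sketch, assuming the only common failure modes are markdown code fences and stray prose around the array (other failure modes would need their own handling):

```python
import json
import re

def parse_llm_json(raw: str) -> list[dict]:
    """Extract the first JSON array from raw model output.

    Handles responses wrapped in markdown fences or surrounded by prose;
    raises ValueError if no parseable array is present.
    """
    # Greedy match spans from the first "[" to the last "]", so a single
    # top-level array survives fences and surrounding text.
    match = re.search(r"\[.*\]", raw, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON array found in model output")
    return json.loads(match.group(0))
```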