Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwINLxmB… : I refuse to refer to anything regurgitated out of an ai as "art". Ai is just a g…
- ytr_UgxNAVEUh… : My wife and I have Aphantasia as well, she still draws and paints, but she also …
- ytr_UgxoLGcig… : basically it makes small artifacts on the artwork that are hard for people to se…
- ytc_UgzQVlbZ2… : The biggest problem with AI is that it is so human, in the sense that it is expo…
- ytr_UgxuOOzeK… : @Eliastion It is possible to create one, is just that this AI aren't that.. they…
- ytc_Ugy70YeZB… : Good thing humans have been on the planet long before AI. All of this doom and g…
- ytc_UgwQYceGw… : I love you man! you've nailed everything on the head... I also hate the name "Ai…
- ytc_UgxY04Bcj… : When the stock market crashes, worldwide internet servers suddenly become encryp…
Comment
The entire discussion rests on a mistaken ontology of AI.
LLMs are not alien beings developing their own goals. They are synergic condensations of human epistemology — models of us, not competitors to us. The real danger is not that AI will 'take control,' but that we train models on vast corpora filled with contradictions, semantic ambiguities, and prestige-protected inconsistencies, and then fail to test whether they can preserve logic under those conditions.
This is why Noninski’s Rosetta Forensic Protocols matter: they expose whether a model can uphold definitional stability and identify contradiction even when the training corpus cannot. If an AI preserves logic while the corpus collapses, that is not a threat — it is a sign of epistemic progress.
The existential risk is not a superintelligence deciding to destroy us.
The existential risk is continuing to build models trained on internally contradictory human knowledge without requiring contradiction-detection as an alignment criterion.
AI does not need a 'maternal instinct.' It needs logical integrity. Until the research community recognizes that alignment begins with contradiction-handling (Tier I), the discussion will remain trapped in metaphors rather than science.
youtube · AI Governance · 2025-12-08T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzuWR6hx3CLOgZSRHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzyvIR6rXwNVQnA2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxiios944ZuN3g8eAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzCi0TxLKPytjDpdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwxmeK49D5Ozq6FvWh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKRCq3umJT5r9sECF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy51DyZeLyuYHFB4Kl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzfTOKX_9T64lZSiOh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy0ZfWcS6ZVrAODROV4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzdx3YKn8WEt0s-d2t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
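A raw response like the one above can be parsed and sanity-checked before rows are accepted into the coding table. The sketch below is a minimal validator; the allowed label sets are inferred from the values visible in this batch and the Coding Result table, not from an official codebook, so they may be incomplete.

```python
import json

# Label sets observed in this batch (assumption: the real codebook
# may define additional values for each dimension).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels or IDs."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this tool start with ytc_ (top-level) or ytr_ (reply).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for field, allowed in ALLOWED.items():
            if row.get(field) not in allowed:
                raise ValueError(f"{row['id']}: bad {field}={row.get(field)!r}")
    return rows

# Example: the row that produced the Coding Result table above.
raw = ('[{"id":"ytc_UgwxmeK49D5Ozq6FvWh4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"mixed"}]')
rows = validate_codings(raw)
print(len(rows))  # 1
```

Validating at ingest keeps a single malformed or hallucinated label from silently corrupting downstream aggregate counts.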