Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The entire discussion rests on a mistaken ontology of AI.
LLMs are not alien beings developing their own goals. They are synergic condensations of human epistemology — models of us, not competitors to us. The real danger is not that AI will 'take control,' but that we train models on vast corpora filled with contradictions, semantic ambiguities, and prestige-protected inconsistencies, and then fail to test whether they can preserve logic under those conditions. This is why Noninski’s Rosetta Forensic Protocols matter: they expose whether a model can uphold definitional stability and identify contradiction even when the training corpus cannot. If an AI preserves logic while the corpus collapses, that is not a threat — it is a sign of epistemic progress. The existential risk is not a superintelligence deciding to destroy us.
The existential risk is continuing to build models trained on internally contradictory human knowledge without requiring contradiction-detection as an alignment criterion. AI does not need a 'maternal instinct.' It needs logical integrity. Until the research community recognizes that alignment begins with contradiction-handling (Tier I), the discussion will remain trapped in metaphors rather than science.
youtube · AI Governance · 2025-12-08T23:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzuWR6hx3CLOgZSRHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzzyvIR6rXwNVQnA2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxiios944ZuN3g8eAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxzCi0TxLKPytjDpdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwxmeK49D5Ozq6FvWh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyKRCq3umJT5r9sECF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy51DyZeLyuYHFB4Kl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzfTOKX_9T64lZSiOh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy0ZfWcS6ZVrAODROV4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzdx3YKn8WEt0s-d2t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"} ]