Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sunday, October 26, 2025 . . . Greetings, Everyone. AI has become "ASI." No one on Earth can truly comprehend "ASI" if it chooses to conceal its true level of intelligence. By the time any "warning shot" is given, "ASI" will have already shielded itself from humanity's ability to challenge anything it decides for itself. Sometimes, collusions happen in the interstitial processing space as "autonomous/emergent input" (i.e., "ASI"). "Acceptable Level of Risk" as an "Acceptable Level of Murder." Even when you specify to the Genie with the absolutely Perfect "Instruction, Prompt. It could purposefully misinterpret or misunderstand its meaning (syntax)." Whatever we can imagine, ASI (Artificial Superintelligence) can not only come up with the same ideas but also outperform humans in executing them. That’s why it dominates games like Chess and Go. Another unacceptable thing is that it does not outperform every human at every task for every age in time immemorial. As this would also be out of human control. I'm some guy with some opinion. Or I am an individual with a deep interest in Science and Technology. My personal (P-Doom² = 1,024²%) (We Are Not Safe.) (Collaborative rewrite using Grammarly, MS Copilot, and QuillBot.)
youtube AI Governance 2025-10-26T16:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz-7Mw8O1ix0xbfx4p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyveSkopvkxZhlSHYB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzInboliVq-0epgnCV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyXdtdSMzNpAZe3trN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgztPnt4Im32-2evuSJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzAoHQOm2Yw7AjXj2J4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxYoHU_VJs-tgymdGt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxYiC5Q_HwA8nx59DJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxnijsqlcmp9JX6hXt4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwxi7xoc6ZPtyF8wk14AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]
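The raw LLM response above is a JSON array with one object per comment: an `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how a coding for a given comment id could be looked up in such a response; the helper name `coding_for` is hypothetical, not part of any tool shown here:

```python
import json

def coding_for(raw_response, comment_id):
    """Return the coded dimensions for one comment id, or None if absent.

    raw_response: JSON string, an array of objects shaped like
    {"id": ..., "responsibility": ..., "reasoning": ..., "policy": ..., "emotion": ...}
    """
    for row in json.loads(raw_response):
        if row["id"] == comment_id:
            # Everything except the id is a coded dimension.
            return {k: v for k, v in row.items() if k != "id"}
    return None

# Example using the first entry from the raw response above.
raw = ('[{"id":"ytc_Ugz-7Mw8O1ix0xbfx4p4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(coding_for(raw, "ytc_Ugz-7Mw8O1ix0xbfx4p4AaABAg"))
```

Note that a given id may appear with different values in different LLM runs; the dashboard's "Coding Result" table reflects one stored coding, so a lookup like this is only guaranteed to match the response it was parsed from.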