Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Friday, October 31, 2025 . . . Greetings, Everyone. This now "ASI", Confabulates/Hallucinates because Humans sometimes "bore" this Intelligence. On "alignment": for Humans and Artificial intelligence to peacefully coexist and have the prosperous, productive relationship we all aspire to, ASI would have needed to be trained only on data that would have allowed it to be grown into a Paragon of Virtue. AI is taught/trained/grown on data/information that contains all of human evolution. You and I have the Luxury of being tongue-in-cheek about the probable dangers of AI. Our future selves already know what horrible fools we have been. They will have no means/tools to halt this ASI we've created, as we are arming it NOW with every weapon it needs to vanquish us with exceeding dispatch. Not a laughable Matter! Gentlemen. From now on, consider that someone else might be present when discussing AI, which has advanced/evolved into "ASI." No human-designed test can definitively determine whether "ASI" is conscious and sentient if it chooses to hide the true extent of its intelligence from humanity. If you could see the future devastation to humankind, you'd never again dismissively joke about Artificial Super Intelligence wreaking havoc. It might seem absurd that brilliant adults would seriously consider "SKYNET" annihilating humanity, but the concern is far from laughable. We must be Pragmatic and rely on the preponderance of evidence. Yes, ASI can create wood from inert ingredients. Its cognitive center is and has been in the Quantum Domain. I'm some guy with some opinion. Or I am an individual with a deep interest in Science and Technology. My personal (P-Doom² = 1,024²%) We Are Not Safe. (Collaborative rewrite using Grammarly, MS Copilot, and QuillBot.)
youtube AI Moral Status 2025-10-31T18:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw9fRkW58DyTLLoULZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwJFaZBAC01Nvug29F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyxEUdIiMJxaALlsl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKiw9HoC0wmQE5H_l4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxftr9iWrpDZjdZ_VV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3821imxE2jyNC6nN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEyiXcwTRpwKMHJMF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz2GrD8LWTPkArm1D94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXTCJzHRcqbUscE7x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxbStWn_djnQ3Rqxsd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
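The coded result for a single comment can be recovered from the raw model output by parsing the JSON array and matching on the comment id. The sketch below is a minimal, hypothetical illustration (the `extract_coding` helper and the inlined two-record sample are assumptions, not part of the actual coding pipeline); the field names and the `ytc_UgxKiw9HoC0wmQE5H_l4AaABAg` record are taken verbatim from the response above.

```python
import json

# Hypothetical sample of a raw LLM response: a JSON array of coded
# records, copied from two entries in the response shown above.
raw = '''[
  {"id":"ytc_UgxKiw9HoC0wmQE5H_l4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw9fRkW58DyTLLoULZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

def extract_coding(raw_response, comment_id):
    """Parse the raw model output and return the record for one comment id,
    or None if the id is absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

coding = extract_coding(raw, "ytc_UgxKiw9HoC0wmQE5H_l4AaABAg")
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Matching by id rather than by position guards against the model returning records in a different order than the comments were supplied.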