Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs can never become a superintelligence. All they do is predict the next word likely to come in a sequence, one word at a time, in response to a prompt, based on a combination of their programming and the sum of their inputs. There are of course other forms of AI that may have other outcomes, but LLMs like Grok, ChatGPT, etc. will never become a Skynet or Matrix-style superintelligence, or even achieve sentience, but they do still pose a threat. The LLM apocalypse will not be sentience lashing out at humanity, but rather the loss of an objective shared reality, which causes humanity to ignore some other form of impending doom out of a loss of shared coherence. Likely climate change. People will shrink further and further into their own personalized hyperreality until some danger whose legitimacy LLMs helped them dismiss wipes them all out.
youtube AI Governance 2025-08-26T22:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzWjHmca4_eRvbEviF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwJqnW0aiFxrNg6hZF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugxm3F5eH7tSDbMH1WV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy2gxwFyk16uvrmZhZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2YBbWRxCoKccSiOJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
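As a minimal sketch, the batch response above can be parsed to recover the coded dimensions for a single comment by matching on its `id`. The `find_coding` helper and the truncated sample array below are illustrative, not part of the coding pipeline; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are those shown in the raw response.

```python
import json

# Illustrative excerpt of a raw batch response: one JSON object per
# coded comment, as in the array shown above.
raw = '''[
  {"id": "ytc_Ugxm3F5eH7tSDbMH1WV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

def find_coding(raw_response, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = find_coding(raw, "ytc_Ugxm3F5eH7tSDbMH1WV4AaABAg")
print(coding["emotion"])  # approval
```

The id used here is the third entry in the raw response, whose values match the Coding Result table for this comment.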