Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs are not artificial intelligence. They are a predictive algorithm programmed to mimic language. An LLM is fed large amounts of “language” and given “weights” that tell it how important or unimportant specific data is; it uses those weights to decide the appropriate response to a given prompt. The only thinking being done is by the humans who created the data. An LLM is a repository of vast amounts of interactions between humans speaking the language it is “trained” in, and it mimics an ability to respond to you with words by matching patterns.

An LLM might be given data that consists of millions of interactions on Reddit, for example. When you ask it a question or give it a “prompt,” it scans its database for words that match the words you used to ask the question and pieces together a likely response based on how many human beings have answered similar questions. It is impossible to train an LLM without data of humans interacting or recording their thoughts in some form of “text.” When you ask Grok or ChatGPT “why is the sky blue?” it doesn’t use reason to work out the answer from what it “knows” about science; it regurgitates, word for word, the most common explanation given by humans to other humans asking that question.

An LLM is not artificial intelligence. It doesn’t understand things. It basically does a Google search for you and tries to summarize the sum of the information based on weighted values. LLMs are not very promising in terms of the utility a true AI could offer, so to keep funding from governments and investors, these companies pretend they are steps away from “AI” that can solve massive long-term issues. Meanwhile, LLMs pose a threat in the form of stolen intellectual property, the further destruction of critical thinking by outsourcing your thoughts to something that cannot think, and the obscene amount of power and mineral resources needed to maintain them. The utility of LLMs does not justify the MASSIVE cost of resources to maintain them, so these companies lie.
They pretend that an artificial construct capable of thought and reason independent of humans is the end point of LLMs, when it isn’t. There is no reason to assume an LLM could ever produce a true AI, much less an ASI. The real regulations we should be considering are ones that limit the harm currently being done by LLMs: the pollution and the consumption of enormously VAST amounts of valuable and limited resources.
youtube AI Governance 2025-08-27T05:5… ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwPe_EhLsrKwTmqoWp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwu5JxiALw9fLe0qXp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxjPWKM3HTm9Rabi3R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz50NE1V6oD9zuThmV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRgYTX9vz33GhZu5V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
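The raw response is a JSON array of per-comment coding records. A minimal sketch of how the dimension table for this page could be recovered from it (the record id `ytc_Ugz50NE1V6oD9zuThmV4AaABAg` is the one whose values match the coding result shown above; the lookup approach itself is an assumption, not necessarily what the tool does):

```python
import json

# Raw model output, copied verbatim from the log above.
raw = ('[{"id":"ytc_UgwPe_EhLsrKwTmqoWp4AaABAg","responsibility":"none",'
      '"reasoning":"unclear","policy":"unclear","emotion":"fear"},'
      ' {"id":"ytc_Ugwu5JxiALw9fLe0qXp4AaABAg","responsibility":"developer",'
      '"reasoning":"deontological","policy":"unclear","emotion":"outrage"},'
      ' {"id":"ytc_UgxjPWKM3HTm9Rabi3R4AaABAg","responsibility":"developer",'
      '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},'
      ' {"id":"ytc_Ugz50NE1V6oD9zuThmV4AaABAg","responsibility":"none",'
      '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
      ' {"id":"ytc_UgxRgYTX9vz33GhZu5V4AaABAg","responsibility":"developer",'
      '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')

records = json.loads(raw)

# Index the batch by comment id, then pull the record for the comment
# displayed on this page.
by_id = {r["id"]: r for r in records}
rec = by_id["ytc_Ugz50NE1V6oD9zuThmV4AaABAg"]

for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
# responsibility: none
# reasoning: unclear
# policy: none
# emotion: indifference
```

This matches the Dimension/Value table above, which suggests the coding result is simply the batch record selected by comment id.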