Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I don't want to be dumb and think it's real but every other time the ài robot ha… (ytc_UgxpCkm9n…)
- Just came to say, YA WE DO KNOW WHAT'S COMING! Nana Nana Boo Boo, Stick your Hea… (ytc_Ugy8gRu_H…)
- Mostly artists are fed up with AI-Art, because uninformed people use fear monger… (ytc_UgyN0cWU5…)
- Pleaee stop speading this divisive BS. Does anyone remener when the chinese made… (ytc_UgwUDez-H…)
- I wonder why no one is talking about the water that A.I. wastes? Water is a fini… (ytc_UgyuHTt1M…)
- might be like 15 and only bcs AI will hide it's power for the last 10… (ytr_UgwUOeDGZ…)
- SO pretty much AI worse for our environment than gas cars lol We've been scammed… (ytc_UgwTNkx0J…)
- @blackprop9393 ohhh! I forgot we are in China. I didn't realize this wasn't the … (ytr_Ugz8Y64-5…)
Comment
LLMs are not artificial intelligence. They are predictive algorithms programmed to mimic language. An LLM is fed large amounts of “language” and given “weights” that tell it how important or unimportant specific data is; it uses those weights to decide the appropriate response to a given prompt. The only thinking being done is by the humans who created the data.

An LLM is a repository of vast amounts of interactions between humans speaking the language it is “trained” in. It mimics an ability to respond to you with words by matching patterns. An LLM might be given data consisting of millions of interactions on Reddit, for example. When you ask it a question or give it a “prompt,” it scans its database for words matching the ones you used and pieces together a likely response based on how many human beings have answered similar questions. It is impossible to train an LLM without data of humans interacting or recording their thoughts in some form of “text.” When you ask Grok or ChatGPT “why is the sky blue?” it doesn’t use reason to work out the answer from what it “knows” about science. It regurgitates, word for word, the most common explanation given by humans to other humans asking that question.

An LLM is not artificial intelligence. It doesn’t understand things. It essentially does a Google search for you and tries to summarize the information based on weighted values. LLMs do not hold the promise of utility that a true AI would. So, to keep funding flowing from governments and investors, these companies pretend they are steps away from “AI” that can solve massive long-term problems. Meanwhile, LLMs pose real threats: stolen intellectual property, the further erosion of critical thinking as people outsource their thoughts to something that cannot think, and the obscene amounts of power and mineral resources required to maintain them. The utility of LLMs does not justify the MASSIVE cost of the resources needed to maintain them, so these companies lie.

They pretend that an artificial construct capable of thought and reason independent of humans is the end point of LLMs, when it isn’t. There is no reason to assume an LLM could ever produce a true AI, much less an ASI.

The regulations we should actually be considering are ones that limit the harm currently being done by LLMs: the pollution, and the consumption of enormously VAST amounts of valuable and limited resources.
youtube
AI Governance
2025-08-27T05:5…
♥ 3
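The comment above describes an LLM as frequency-weighted pattern matching over human text. As a deliberately toy illustration of that framing (a sketch only: real LLMs learn neural weights over tokens, they do not keep raw counts or a searchable database), a bigram next-word predictor over a tiny made-up corpus:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "millions of human interactions" (illustrative only).
corpus = "the sky is blue because the sky scatters blue light".split()

# Count how often each word follows each other word — the crudest possible "weights".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "sky" — it follows "the" twice in this corpus
```

The gap between this sketch and a real model (learned distributed representations rather than literal lookup) is exactly where the comment's "it regurgitates word for word" claim oversimplifies.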
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwPe_EhLsrKwTmqoWp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwu5JxiALw9fLe0qXp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxjPWKM3HTm9Rabi3R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz50NE1V6oD9zuThmV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRgYTX9vz33GhZu5V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
```
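The raw response above maps to the "Coding Result" table by parsing the JSON array and indexing it by comment ID. A minimal sketch, assuming the array-of-objects shape shown (the two rows embedded here are copied from the raw response for illustration):

```python
import json

# Raw LLM coding response: a JSON array of per-comment codes (sample rows).
raw_response = '''[
 {"id":"ytc_UgwPe_EhLsrKwTmqoWp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugwu5JxiALw9fLe0qXp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

def codes_by_id(raw: str) -> dict:
    """Index the parsed array by comment ID for O(1) lookup."""
    return {row["id"]: row for row in json.loads(raw)}

codes = codes_by_id(raw_response)
row = codes["ytc_UgwPe_EhLsrKwTmqoWp4AaABAg"]

# Render the dimensions as a markdown table row set, like the result card above.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"| {dim.capitalize()} | {row[dim]} |")
```

Indexing by ID is what makes the "Look up by comment ID" control above cheap: each batch response is parsed once, then every inspection is a dictionary hit.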