Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- 👀If humans can't tell that it is a robot, they are a lot more brain damaged than… (ytc_UgzWcEazC…)
- There is a guy in my school that is “writing a book” with ChatGPT. So he isn’t w… (ytc_UgxZx5ivZ…)
- The flaw in this argument is, Who are the consumers? You only have a Resource C… (ytc_Ugwv8B4BZ…)
- It’s true most of the time ai is useless to me it’ll give me the complete wrong … (ytc_UgwH9MqKh…)
- Tesla is going to wipe out wayno..... I don't know why I can't drive full self d… (ytc_UgwPxtrGv…)
- Haha, it does sound like she's got a diplomatic way of answering! Sophia really … (ytr_UgwLyzgs2…)
- It's not just AI it's the fact that programs like Canva make anyone think they c… (ytc_UgwvekIh-…)
- If it's shot for shot, then show the shot for shot / But knowing how AI works, I … (ytc_Ugy9QjLcy…)
Comment
LLMs don't lie. They use probability to find the word/token that works best in the context of your prompt(s) and what the model has already written. It's basically a huge database in a hyperdimensional space, with each dimension representing a different context in which words relate to each other: the closer together two words are, the greater the probability they come up in the same sentence, and the farther apart, the lower. These LLMs are really good at communicating, since they have learned the best ways to communicate and have generalized over a lot of words and a lot of contexts (over 12,000 dimensions; try to visualize that spatially).
I study computer science, and to me it's really important to de-abstract this kind of jargon, because it often misrepresents what the models actually do. The same goes for "hallucinations": the term gives a misaligned abstraction of what the technology actually does. For the general public to be better informed, we should talk about it as a piece of technology rather than as a living creature.
I get that we want to use human terms to describe the models, because if we looked at what a model does as if it were a human being, it would be lying. But pick that apart a bit: the model is finding the style and combinations of words it has previously been most successful with. You cannot lie without intent. If I tell you something that isn't true, but I believe it to be true, am I lying? Take away the belief; am I lying then? I don't believe so, but I could be lying...
The most abstract way I would explain how LLMs work is this: it's AI that imitates our perception of what AGI (Artificial General Intelligence) is, because that is what we're training it to do. Whether that will lead us to true AGI is not something any expert knows or has proven.
youtube
AI Governance
2025-11-26T21:1…
♥ 33
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxPo0hIRTQ921Jnled4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWn8b4A-EuABGyNtF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwnZGMbZUNe0u8S-nR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyhMfYyGyYnU31qgN14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwXNLVUBTKgKhC_aSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0gkY9YKs4-WClrbF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxcDfCD4b3wfcrWK_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgCU7bY8hnLnIUD8t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwZTecmqJLoPT5ORGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxobctoSh9O1yYWn6l4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
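The "Coding Result" table above is one record of this array rendered for a single comment. A minimal sketch, assuming the raw response arrives as a JSON string, of indexing the batch by comment ID so any coded comment can be looked up; the helper name and the two-record sample are illustrative, not part of any real pipeline:

```python
import json

# Raw model output as returned by the coder (truncated to two records
# here for brevity; the real batch above has ten).
raw = '''[
 {"id":"ytc_UgxPo0hIRTQ921Jnled4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwXNLVUBTKgKhC_aSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Emotion labels that actually occur in the batch above.
ALLOWED_EMOTIONS = {"indifference", "outrage", "fear", "mixed", "approval"}

def index_codings(raw_json: str) -> dict:
    """Parse the batch response and index each coding by comment ID."""
    records = json.loads(raw_json)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)

# Reject any record whose emotion falls outside the codebook.
for rec in codings.values():
    assert rec["emotion"] in ALLOWED_EMOTIONS

# Resolve one comment's coding, as the "Look up by comment ID" box does.
row = codings["ytc_UgxPo0hIRTQ921Jnled4AaABAg"]
print(row["responsibility"], row["emotion"])  # → none indifference
```

Keying on the `id` field makes per-comment lookup constant-time and also surfaces any IDs the model silently dropped or duplicated in its response.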