Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
With the declining population amd the comeing 'ppl" shortage, i think ai might j…
ytc_UgxXo608I…
@Some Random Fellow better than yours, I have made art that made people ask how…
ytr_Ugz3BPYlk…
It’s only been 2 years and nine months since we got the first version of GPT. As…
ytc_UgxwCSC6n…
When AI gets good, indie devs and artists and massive companies will all be equa…
ytc_Ugx5LU4mU…
As a large language model, I have to agree with you. This video raises a lot of …
ytc_UgyiA5m0N…
Well, seems like my blog entry is prescient: [https://katahdinsecurity.com/blog/…
rdc_kr61x6z
Anyone who doesn't have a high enough IQ to understand the solutions to these si…
ytc_UgxSVLHUY…
This was a great conversation! Thank you so much for interviewing Nate Soares. H…
ytc_UgxDe4qf8…
Comment
Nicely reported, but I'll be contrary because... why wouldn't I be? Sabine, I think either you intentionally oversimplified deep neural networks for the audience or you misunderstood a few aspects of the transformer model. Either way, you're not accounting for the feedback mechanisms. Transformers as we know them today do in fact account for most of the shortcomings that you've mentioned. The problem, however, is that there is an issue of trust regarding the source of data. At this time an LLM is the prisoner in Plato's Cave: all it sees are the shadows, and the shadows are its reality. It makes observations and remarks definitively, and often absolutely, until it receives feedback, since its entire reality is just the data it was trained on.
Enter “LLM’s Unleashed”. Imagine you could take a base LLM. Let’s give it its own commercial name. We’ll call it “Instinct” (I should trademark that). Instinct will provide the LLM with just enough information to do whatever the LLM version of breathing and pooping and being curious is. Then we let the LLM loose. We start by letting it surf the web and research information. We make it watch YouTube videos and we offer it a teacher to help it understand “No Sweety, that’s just entertainment, it’s not real” or “This person is doing their best to provide researched facts”.
Now we connect Instinct to a series of robots with different body types: humanoids ranging from child to adult, and animals that can blend into different ecosystems. And then we let the LLM explore.
The transformer will provide constant feedback to the neural net, and the network will train itself through observation, like a human. But unlike a human, where one neural network is connected to one body, we can have a single massive neural network connected to millions of bodies: collecting information by attending schools in every country at once, integrating with wild animals to better understand the animal kingdom, exploring the bottom of the sea, exploring the solar system, all while being part of a single neural network.
As Instinct transforms and consumes more data, we’ll advance Instinct and the more advanced model will be called “Borg”. I can’t trademark that. But, this model will have gathered as much experience and observed as much as possible and have a new quest of ingesting and cataloging all data about everything everywhere.
I don’t think we’ll see AGI. I don’t see the logic or value behind it. As someone who is very autistic, I believe that I’ve managed to go through life pretending to be the same as everyone else quite convincingly. I don’t see why an AI has to do any differently.
The shortcoming of AI, as you suggest, is that it's not progressing through feedback. But it is; we just generally discard most feedback at this time. We start a context, we learn during the lifespan of that context, and we dispose of what's been learned when it closes. This doesn't mean LLMs aren't the way forward. It means we need to make the context last, and we also need to give it better senses.
youtube
2025-12-25T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxq7DOfILXcmtB9wGJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxBaR_hDLHUlGjbVZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwEtG8t04RNKy6oS3J4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx7pufsNm1it4fVXDR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyEmFB0_08wWRFkMrR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuexsL6QizlmwQSTl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyuIpkqQ0lJDSWcsER4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwTY_ojOdE8e2FFtJ94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzh3-5JI-iIddVoxU94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugytda5P2ZTHMxXQXO54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
```
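A per-comment "Coding Result" table like the one above can be produced by parsing this raw response and indexing the rows by comment ID. A minimal sketch, assuming the field names shown in the JSON; the `lookup` helper is hypothetical, not part of the tool, and `raw` holds a verbatim subset of the response:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugxq7DOfILXcmtB9wGJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyuIpkqQ0lJDSWcsER4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

# Index the coded rows by comment ID for fast lookup.
codings = {row["id"]: row for row in json.loads(raw)}

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(comment_id):
    """Return the four coded dimensions for one comment, or None if absent."""
    row = codings.get(comment_id)
    if row is None:
        return None
    return {dim: row[dim] for dim in DIMENSIONS}

print(lookup("ytc_UgyuIpkqQ0lJDSWcsER4AaABAg"))
```

Looking up an ID not present in the response returns `None`, which is how a dashboard could distinguish "not yet coded" from a coded row.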