Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Modern neural networks are built around one main idea: given input X, predict output y. For chat-bots, X is a sequence of tokens (numbers representing word or image fragments) and y is a probability distribution over the next token in that sequence. The bot then uses those probabilities to pick the next token, adds it to the sequence, and repeats until it reaches some type of <stop> token.
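The sample-and-append loop the comment describes can be sketched in a few lines. This is a toy illustration, not a real model: the "model" here is a hypothetical fixed table of next-token probabilities keyed on the last token only, whereas a real LLM conditions on the whole sequence.

```python
import random

STOP = "<stop>"

# Hypothetical next-token distributions (an assumption for illustration;
# a real LLM computes these from billions of learned parameters).
NEXT_TOKEN_PROBS = {
    "<start>": {"Hello": 0.9, STOP: 0.1},
    "Hello": {"world": 0.7, "there": 0.2, STOP: 0.1},
    "world": {STOP: 1.0},
    "there": {STOP: 1.0},
}

def generate(max_len=10, seed=0):
    """Sample tokens one at a time until a <stop> token or max_len."""
    rng = random.Random(seed)
    seq = ["<start>"]
    while len(seq) < max_len:
        probs = NEXT_TOKEN_PROBS[seq[-1]]
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        nxt = rng.choices(tokens, weights=weights)[0]  # pick next token
        if nxt == STOP:
            break
        seq.append(nxt)  # append and repeat, as the comment describes
    return seq[1:]
```

The loop structure (predict distribution, sample, append, repeat) is the same in production systems; only the distribution source differs.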
The math that the models use to calculate that next token is complex, but well documented by researchers. The problem comes from the billions of parameters that go into the calculation, which are all determined and refined by a high-speed trial-and-error loop that we call "training". What a chat-bot tells you depends on the examples of data it was trained on and on its prompt (instructions or examples that get put into the starting sequence of tokens). Training is time-consuming and expensive, but we can build prompts for specific requests and fill them with verified documents or Google search results, or, in the case of Grok, with non-mainstream sources and Elon Musk's opinions.
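"Filling a prompt" with retrieved documents, as the comment describes, is mostly string assembly. A minimal sketch follows; the function name and template wording are illustrative assumptions, not any real API.

```python
def build_prompt(question, documents):
    """Assemble a prompt that prepends source documents to a question.

    The template below is a made-up example; real systems vary in how
    they delimit and label the retrieved context.
    """
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer using only the sources below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```

Whatever lands in `documents` (verified references or otherwise) shapes the answer, which is the comment's point about source selection.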
youtube
AI Governance
2025-08-28T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwGcqiUSu8cYDEti-54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHYRPfGwNlHXWXURR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxV20I5bW-QpU2dx954AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2SmVzK217NOCMDNl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxNTV-vFGbfLC-Zoa14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
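A raw response like the one above is a JSON array of per-comment codings. The following sketch (not part of the original pipeline) shows how it could be parsed and sanity-checked; the allowed value sets are assumptions inferred only from the values visible in this record.

```python
import json

# Assumed codebooks per dimension, inferred from the output shown above.
ALLOWED = {
    "responsibility": {"none", "unclear", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "fear", "indifference"},
}

def parse_codings(raw):
    """Parse a raw LLM response and reject rows with unknown values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"unexpected {dim!r} value in {row.get('id')}: {row.get(dim)!r}"
                )
    return rows
```

Validating against a fixed codebook catches the common failure mode of an LLM inventing labels outside the requested scheme.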