Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- AI is not a new "tool". It's is a full replacement. We're cooked. These guys can… (ytr_Ugw7i78w3…)
- So now give the robot a rifle and lets see how that dump truck holds up 😉… (ytc_Ugy-Z0lCr…)
- We can't even get automate customer service in a useful way and they've been wor… (ytc_UgyVcM4v3…)
- It's coming. Every programme designed to "help" physicians including radiologis… (ytc_UgyEl97Pb…)
- Assuming those using the tool are lazy, is the mistake here. There are tons of c… (ytr_UgwSzoM6t…)
- What to do with all the extra free time? Fully dive into human-to-human connecti… (ytc_UgznwtNh7…)
- Haha, love the reference! If only we could ask Sophia for her take on a potentia… (ytr_Ugxy5SWPl…)
- So ai trained on whole of internet is saying out of pocket things. Truely shock… (ytc_UgzUdqCMP…)
Comment
I studied computer science and I have *some* knowledge about the process of AI learning for what common folk call generative AI.
It is important to note that, except for the most basic neural networks (perceptrons), every single type of AI, from your Groks to a bot that can play Pac-Man, follows a concept of "hidden layers". These are the parts of the neural network where data is processed during training or use by the AI. The data may "mean" something that helps the AI accomplish its task, but it is practically illegible to human programmers, since the patterns of the data in the hidden layers are more akin to noise.
That doesn't mean we don't know what they are doing. For decades, actual computer scientists have created metrics to evaluate not just current AIs but also the "scaffolding", as the example programmer said. We already know that the family of algorithms LLMs currently use, transformers, has certain limits in its capability to generate coherent text. For example, there is scientific basis for the claim that hallucinations, or "agentic misalignments", are inevitable and can only be mitigated, not erased.
So, analyzing large AI models is like analyzing the environment of a city: we can't know for sure, 100% of the time, what every single human is doing at a particular moment and what the consequences are, but we still know about civil engineering, ethnography, and proper political policy, thanks to how many branches of the social sciences measure and observe a city and parse meaningful data from it.
youtube
AI Governance
2025-08-26T18:2…
♥ 1
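The "hidden layers" idea in the comment above can be illustrated with a minimal sketch. The network below is a toy two-layer perceptron (hypothetical weights, not any production model): the intermediate `hidden` vector is exactly the kind of internal representation the commenter describes, numerically useful to the network but noise-like to a human reader.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 inputs -> 8 hidden units -> 2 outputs.
# Weights are random here purely for illustration.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    # The hidden activations are the "hidden layer" the comment refers to:
    # meaningful to the network, but practically illegible to humans.
    hidden = np.tanh(x @ W1)
    output = hidden @ W2
    return hidden, output

hidden, output = forward(np.array([1.0, 0.0, -1.0, 0.5]))
print(hidden.shape, output.shape)  # (8,) (2,)
```

Inspecting `hidden` directly shows eight floats with no obvious human-readable meaning, which is the point the comment makes about interpretability.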
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[{"id":"ytc_UgwBrwpZvCM9hKjlZvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyt_yFq4AhRb89AtrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwR1pya6aNEcCgvlqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZ2M5S3HJfxlM9WrJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgycCoiKapaGFWZFvOF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]
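A response in this shape can be turned into a lookup of per-comment codes with a short parsing step. This is a minimal sketch, not the tool's actual pipeline; the dimension names come from the JSON above, while the shortened IDs in the sample payload are hypothetical stand-ins.

```python
import json

# Sample payload in the same shape as the raw LLM response above
# (IDs shortened here; field names match the real JSON).
raw = """[
  {"id": "ytc_example1", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Map comment ID -> {dimension: value}, defaulting missing fields to 'unclear'."""
    records = json.loads(payload)
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
        if "id" in r
    }

codes = index_codes(raw)
print(codes["ytc_example2"]["emotion"])  # fear
```

Defaulting missing dimensions to "unclear" mirrors how the Coding Result table above falls back to "unclear" when the model output cannot be matched to a dimension.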