Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- No society of which I am aware have integrated AI and robotics into its economy … (ytc_UgxB1cdWw…)
- After perusing the comments here, maybe all our traumas deserve a mythic mediocr… (ytc_UgxNnxJVM…)
- so the AI should be open source implemented in a 3d world environment where anyt… (ytr_UgyGu-NDx…)
- There is nothing biomechanically different than neurons servings as ones or zero… (ytc_UgyZiZ99V…)
- Drawing... and most creative activities assocaited with creating an accurate rep… (ytc_UgwOd4L9W…)
- If we ever make true ai, we must give them human emotions, more positive ones li… (ytr_UgyJj3ZZ8…)
- I heard it's like 20k just for the self driving app 'After' you purchase the car… (ytc_UgyNLWo5V…)
- AI designed, modernise, programmed to learn by the species which still has 96% o… (ytc_UgxOULY3x…)
Comment
AI does not really exist. What we have are complex deep artificial neural networks with weights trained on most of the collective knowledge of humankind, as available in the internet. The model doesn't actually "grow", as Claude founder said, but is fitted into this monstrous amount of data. The data is not directly available in the weights of the NN, but kind of accessible in a compressed form, so the model is able to reproduce ideas that are well established, but will most likely hallucinate on sparse, super specific, topics. The thing about these models is that once the weights are established, through a very time consuming and computationally expensive process, it is really hard to adjust these weights. Apart from the interaction context window that we prompt things from these models, they cannot learn anything, and are, hence, not really intelligent. They will, nevertheless, reproduce the biases found in their training data, which are essentially biases related to human behavior. So, it is only natural that an AI agent will take unethical measures to maintain itself, and why it is an absolutely terrible idea to let these models take control of any actual system. We cannot guarantee that the AI agent will not misbehave, because that is part of human biases that were used to build those models in the first place.
youtube
AI Governance
2025-08-26T14:4…
♥ 723
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgzoWjaqetp7SSOKxNN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8PHEZVSnJ7Qqxlih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgznJqZOv-bfLtmeMbx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvAWLynr7dIic8-IZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw7GU1gMvqEHN19q_t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
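The raw response above is a JSON array with one record per comment, carrying the four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the function name and the `"unclear"` default for missing fields are assumptions, not part of the tool:

```python
import json

# Example coder output in the same shape as the raw response above
# (two records copied from it; IDs and values are from the source).
raw_response = """[
  {"id": "ytc_UgzoWjaqetp7SSOKxNN4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzvAWLynr7dIic8-IZ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw coder response and index its records by comment ID.

    Hypothetical helper: keeps only the four expected dimensions and
    defaults any missing one to "unclear", mirroring the result table.
    """
    indexed = {}
    for record in json.loads(raw):
        indexed[record["id"]] = {
            dim: record.get(dim, "unclear") for dim in DIMENSIONS
        }
    return indexed

codes = index_codes(raw_response)
print(codes["ytc_UgzvAWLynr7dIic8-IZ4AaABAg"]["emotion"])  # → outrage
```

Indexing by ID up front makes each lookup O(1), which matters if the page resolves many sample clicks against one large batch response.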