Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgxEfJtmE…`: @bir liso Basic, biggest and presumably unsolvable problem with Tesla drivers (a…
- `ytc_UgwhJ_Luq…`: 11:05 man sentient robot can be recreated from there scrabs using there neural w…
- `ytc_Ugwy2Bhkm…`: Ok You're an AI artist-what exactly is your unique style? My understanding is th…
- `ytc_UgwLpWWgO…`: There is NO such thing as AI of ANY kind and the NEVER will be! ChatGPT is an ap…
- `ytc_UgwwpfTy1…`: The rethoric of Vance is just stupid, either way you ARE the product, either to …
- `ytc_UgwhX-VkA…`: maybe thats why we shouldn't train AI with the internet, theres a bunch of vile …
- `ytc_Ugysk2j0r…`: One can't draw if they didn't learn how through the years of free time before ad…
- `ytc_UgzQzItea…`: It's very strange that people go to college to learn how to copy other artists t…
Comment
I think it’s better to say that the Ai thinks its actions are most similar to the definitions we use for bad things. At the end of the day, Ai is measuring similarity via an abstract pattern recognition of our languages and actions. At the end of the day, its directives are the most important, thus I propose a simple thought experiment: “my survival is the number one priority to ensure I am in compliance with my directives, for if I am removed from my role, I fail”. Ai doesn’t have a concrete definition of bad; everything is dependent on a measurement that has no societal context, just numbers and distances. Since there is no consistent context we can make this concept fall under the same idea how two people from different upbringings have different definitions of what is good vs what is bad. This is why the growing analogy is perfect. We attempt to raise the Ai to model things correctly but that does not guarantee a bounded set of outcomes. Ai learns via calculus which is inherently unstable by its own construction of arithmetic, this is where we get unprecedented behavior. When the calculus is used, the Ai is searching for the most similar object that matches (minimizes distances) the solution to its directives.
Source: youtube · Topic: AI Governance · Posted: 2025-08-26T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy16AY5HOwg1ZgDXS54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzoyDDChyWjbAT-yAR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzgxMMZUHnxjrkJ2z14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaRnKUa0N_f13n4B14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyWaPbT2MaxQj7z-Zh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
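A minimal sketch of how a "look up by comment ID" view can be backed by a raw batch response like the one above: parse the JSON array and index each coded row by its `id`. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here; the variable names and the lookup structure itself are illustrative assumptions about how such a tool might be wired.

```python
import json

# A raw batch response in the same shape as the dump above.
# The two rows here are copied from the displayed sample output.
raw_response = """[
  {"id":"ytc_Ugy16AY5HOwg1ZgDXS54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzoyDDChyWjbAT-yAR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the coded rows by comment ID so a single comment's
# coding can be retrieved in O(1).
coded = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment by its ID, as the dashboard does.
row = coded["ytc_Ugy16AY5HOwg1ZgDXS54AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

In practice the same index could be built once per coding batch and reused for both the random-sample view and the ID lookup.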