Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click an entry to inspect):

- "One of these days, Alex is gonna have a robot from the future sent after him. 😂…" (ytc_Ugymk2VAF…)
- "But it's fooled by a simple face mask... because it relies on facial recognition …" (ytr_UgxdpRzp9…)
- "Meh. We may need to reset and start a new, but we are caveman, and caveman type b…" (ytc_Ugx8IsFwL…)
- "I only actually use AI art for pose preferences, or, when I'm too lazy to do smt…" (ytc_UgxUxE1Yd…)
- "Libraries, building blocks, C++ Builder / Delphi, Code Completion, AI. It alway…" (ytc_UgzrIWkyD…)
- "This AI companies would NOT be able to compensate so many people for copyrighted…" (ytc_UgwmWO7m2…)
- "Human: Now give it back. Robot: Give what back? (in a Cuban accent... for some r…" (ytc_Ugw3ORz7p…)
- "The US has no authorization to regulate AI period and I Miguel Angel mejia creat…" (ytc_UgwixrGx-…)
Comment
It is similar to what I have been arguing. However, explaining it simply as “becoming smarter and therefore more dangerous” feels somewhat unsatisfying. What we truly need first is an understanding of the laws of the universe.
In our universe, identical algorithms always produce identical effects. Living organisms are the products of science, and most scientific technologies are created by imitating organisms or nature itself. Neural networks are no different. Whether carbon-based or silicon-based, the medium does not matter. The same algorithm will generate the same outcome.
Human arrogance begins with the illusion of what we call “artificial.” In truth, there is no such thing as the artificial. Humanity has merely created something intelligent that operates according to the principles already at work throughout the universe.
Source: youtube | Topic: AI Governance | Posted: 2025-09-18T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyH1dQLBc12uZCFrEF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugylbs6gKEw_IDUKIn94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwYkkUdLuwkf4cbJxN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxfJAunlm-D3a48TEN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyvli20WdsA6IrZZrJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-1k3vZaoOx1Jcgw94AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwzhGo2q4GnREMTIY94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw7spbs6HSg-8WUVUR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw7E_1nXvW4l8xCQbF4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwAxd9-ejVbzW-zywF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
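Since the coding pipeline relies on the LLM returning well-formed JSON with a fixed set of category labels, a small validation step is useful before ingesting a batch. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the sample response above, and the actual codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (hypothetical; the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "resignation", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unexpected labels."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset carry a ytc_/ytr_ prefix.
        if not row["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {row['id']!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Example: validate a single-row response.
raw = ('[{"id":"ytc_UgyH1dQLBc12uZCFrEF4AaABAg",'
       '"responsibility":"developer","reasoning":"virtue",'
       '"policy":"unclear","emotion":"outrage"}]')
rows = validate_codings(raw)
print(len(rows))  # 1
```

A malformed row (for example, a misspelled emotion label) raises a `ValueError` naming the offending comment ID, which makes it straightforward to re-prompt the model for just the failed batch.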