Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "in reality yes 90 % Robot can not trust it's dangerous in a future .…" (`ytc_UgwBOSpRl…`)
- "Can we just not program conscious AI? Honestly there's tons and tons of caution…" (`ytc_UgjcsyISv…`)
- "yall if an app steals your art for ai then just post your crappy art…" (`ytc_UgwZ8fPaG…`)
- "Unfortunately, I think things aren’t going to stop either once AI art becomes go…" (`ytc_Ugx1__h9b…`)
- "We appreciate your question. The physical appearance of robots like Sophia is de…" (`ytr_UgywjazSM…`)
- "the AI videos I've seen, shamelessly manipulating the AI shows the best results.…" (`ytc_UgyW-Cz_f…`)
- "What is this AI slop? Meta doesn't have 600k H100's, they have compute *equivale…" (`ytc_Ugzp7T_Ul…`)
- "Big mistake!!! Google uses their custom made TPUs for Ai stuff, not just Nvidia …" (`ytc_Ugx7PD9-q…`)
Comment
Just my opinion, but I think most people look at AI like people looked at Y2K (if you remember that). It was supposed to be a big deal and turned out to be nothing. I think people trust that if we are developing something detrimental to everyone that we will collectively agree to stop. But therein is the issue, as I see it. It won't seem detrimental to everyone. Much like opiates, it will appear to be a miraculous answer to our pain and problems. But we are really the problem; more specifically, our ability to control our desires. We all know how that usually works. People rarely do anything unless they have to. Like the mouse that chooses cocaine over food, we will likely rush to our own destruction. What do people like to do? Eat, feel good, look good, have sex, sleep, relax, have something to do that's not too demanding, something to pass the time. I can do drugs better than ai. That's all I can think of.
youtube · AI Governance · 2025-09-28T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzF1E6bqoPueJ0UoeN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFPQ8Rsiak2RizAqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8XBLg-qXUjZeqWD54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzibRfIWc9zkzi17Fd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyN6HcUUhRGj4EwAO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxO54jz8wrXnMu_Cbl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxxQF1d-IvRpe56e9l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzsHrgyYaeoSPt0-AZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzuWgM0tpNWWOAkgKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCC8TQ5co1IUay--B4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
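The raw response is a JSON array of per-comment codes across the four dimensions shown in the Coding Result table. As a minimal sketch, assuming a Python pipeline, this is how such a response might be parsed and validated before it is written to the results table. The `ALLOWED` sets contain only the label values visible in this dump, so the full coding vocabularies are assumptions, and the comment id in the example is made up.

```python
import json

# Hypothetical label sets, inferred only from the values visible in this dump;
# the coding scheme's complete vocabularies are assumptions.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"consequentialist", "virtue", "contractualist"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "indifference", "approval", "mixed", "outrage"},
}

def parse_codes(raw_text: str) -> list[dict]:
    """Parse a raw LLM coding response, rejecting records with a
    missing id or a label outside the allowed sets."""
    records = json.loads(raw_text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if not rec.get("id"):
            raise ValueError("record missing comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with a made-up comment id:
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # mixed
```

Validating up front means a malformed or hallucinated label fails loudly at coding time rather than surfacing later as a bad cell in the results table.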