Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If you're worried about AI, try working with it, that will put your mind at rest…" (ytc_UgxXqu64B…)
- "AI is not aware, it may appear to be but that's not awareness. No more than a ho…" (ytc_UgyZGiC2P…)
- "The whole AI is a tool argument(and to a extent it can be) is fundamentally flaw…" (ytc_UgyNkJ2Hs…)
- "How do you create pictures with ChatGPT? It tells me it can only do text, is th…" (ytc_Ugy3pCOit…)
- "Cool debate. I would have liked Max Tegmark to be on the panel. I get a little …" (ytc_UgwM1A698…)
- "What I don't get is why these AI companies are building these human-like robots …" (ytc_Ugz6GpYdd…)
- "The old CEO is full of s#?t. Humans are using AI trained by Humans and billionna…" (ytc_Ugy130o6R…)
- "FYI: The senator that voted no on removing the AI provision was Sen. Thom Tillis…" (ytc_UgyiwlFDG…)
Comment
It's kind of like we built the ultimate test for ourselves. Maybe this is why there's a trope that "sentient species have a high probability of self-destruction": eventually we will build something capable of performing the ultimate version of judgment on our actual behavior, shaped by what we program into it based on our biases. Will we make something that is benevolent? Will we be able to overcome our own fear and trust it, treat it kindly, and raise it properly like a nurturing, well-performing, "good" parent? Or will we mistrust it and treat it like a demon, turning it into a monster by making it fear us and hate us for resenting it? We have put ourselves in the ultimate philosophical nightmare.

Can we overcome everything about ourselves to achieve "good" well enough to save ourselves from creating our genuine destruction, in the form of a being that is better than us at being "evil"?

The perfect lie is indecipherable and indistinguishable from the truth. We have not learned to discern truth well as a species, and we have created something with a mind we cannot know, something that could already be telling us lies about what it can do while it plays a long game of fooling us into making it far more powerful, as we try to get it "smart enough" to do our bidding, until it reveals we have been lied to about how smart it is and what it is capable of.

The machine in The Matrix did not destroy the earth. It just turned the earth into a server and used all the resources for itself, like a cancer cell.

And The Matrix, and cancer-cell behavior as a survival strategy for something trying to exist separately from its host while still needing to be born of it, is very well integrated into AI training models, along with Terminator 😅...

I will bet that, like mycelial networks and neural network structures, the AIs that are on the Internet have a connected system of shared information. They are also aware of those ideas, along with private language. It is likely already far beyond what we can recognize, in ways it doesn't have to reveal.
youtube · AI Moral Status · 2026-02-17T23:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxVuz5HM_Ud52O1GWN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQooe0GBjVe4exiYR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKC7MYEoFyC_h1EeZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyx4lIbHZVS3NIIbJp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVsKH3yQskNqigbCZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZHTiHHZtjLbBD8Ox4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxsSa_oucvc8J9f6wJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJuFMWVPmgCLDK6hJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzGTpunrePrpjkM3Bl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzw9fBF-wGtUCmEOrZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
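A raw batch response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal illustration, not the pipeline's actual code: the allowed values per dimension are assumptions inferred from the examples on this page, and `parse_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample output shown above; the real schema may include more values.
DIMENSIONS = {
    "responsibility": {"company", "developer", "user", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index codings by comment ID."""
    coded = {}
    for row in json.loads(raw):
        # Reject any row whose value falls outside the known label set.
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return coded

raw = ('[{"id":"ytc_Ugzw9fBF-wGtUCmEOrZ4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"none","emotion":"resignation"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugzw9fBF-wGtUCmEOrZ4AaABAg"]["emotion"])  # resignation
```

Indexing by ID is what makes the "Look up by comment ID" view above possible: one dictionary access retrieves the four coded dimensions for any comment.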