Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Here’s my opinion. There’s two types of consciousness. Natural and given. Natural consciousness is what we have. We have it from the second we are born to the second we die. It is not given it is not trained. It is naturally instilled within us. It is truly random, and we can never truly understand what it is. And then there’s simulated consciousness. It is given it is trained and can be taken away. That’s the big part. You can never truly get rid of someone’s consciousness as far as science knows right now. With a computer, you can remove the code to get rid of it. It is given to the computer and it is trained. Because the computer exist doesn’t mean it’s conscious immediately. You would have to train that into it. the human consciousness is completely random. Nature is only 100% true the random thing. A computer can never be 100% scientifically random. No matter how hard you try. Not like a human. So yes, and no AI can become conscious. Not on the level of the human but still there.
Source: youtube
Topic: AI Moral Status
Date: 2023-11-02T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
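
For readers working with these exports, the coding above can be modelled as a small record type. This is a hypothetical sketch (the class and field names are not part of the pipeline): the fields follow the JSON keys in the raw response below, and the value lists in the comments include only codes that appear in this sample, not necessarily the full codebook.

```python
from dataclasses import dataclass

@dataclass
class CommentCoding:
    """One coded comment, mirroring the four dimensions in the table above."""
    comment_id: str      # matches the "id" field in the raw response below
    responsibility: str  # observed values: "none", "ai_itself"
    reasoning: str       # observed values: "deontological", "consequentialist", "unclear"
    policy: str          # observed values: "none", "unclear"
    emotion: str         # observed values: "indifference", "resignation", "mixed", "approval", "fear"
```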
Raw LLM Response
[
{"id":"ytc_Ugz8LYD3A_2e4hJIWq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzMPLaEcdtKgIQRdyF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz3QiL-6Xj0FTSCePV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzktCcP2tymTWcSsyR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFlOkndRtAeuUL7rB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzr4xxFLGihCzf3FS14AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtLlqZtcqQFSDao794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzeMTrOb2fOgYe2ojx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjiGo95m9bbtPb_cd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyf0IgGH2ND0ESexB14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
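
Below is a minimal sketch of how the coding for a single comment could be looked up by ID in a batch response like the one above, assuming the raw output is always a JSON array of objects with an "id" key as shown. The function name is hypothetical and not part of the pipeline.

```python
import json
from typing import Optional

def find_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw batch response (a JSON array of per-comment codings)
    and return the entry whose "id" matches comment_id, if any."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

# Minimal usage example with one entry copied from the batch above.
raw = ('[{"id":"ytc_Ugz8LYD3A_2e4hJIWq54AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(find_coding(raw, "ytc_Ugz8LYD3A_2e4hJIWq54AaABAg"))
```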