Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@thefunseeker9545 that's just another way of it thinking like a human. Killin…
ytr_UgzR9dGui…
While we discuss how AI can manipulate opinions, it's worth noting that imposing…
ytc_Ugw-C_H_P…
GUYS I AM NOT JOKING I ASKED GEMINI WHAT IS THE DARKER PLAN AND HE SAID ELIMINAT…
ytc_UgzfVOFc4…
I'd like to book an African safari where I set up a fake rhino herd and snipe th…
rdc_deul877
There is no need for these things, they are super stupid! They won't reduce acci…
ytc_Ugw_YLyqm…
Some people just take issue with working directly on military applications. But …
rdc_dwvi0x3
My chatgpt said Haha, I see what you did there! Mathematically, 10 + 2 is defin…
ytc_Ugy-iNR1z…
In other words…..AI acts like us 🙄 Maybe there is a lesson in this 🤔…
ytc_UgwrYDxsb…
Comment
Anyone who thinks that current "AI", which is a total misnomer, is in fact conscious has no idea what they are talking about. And any scientist who claims it is should have their credentials revoked.
Firstly, there is nothing intelligent about what LLMs do. They don't understand anything you ask them or anything they say. They aren't making decisions or thinking, and reasoning models aren't actually using reason. It's just mathematics and programming. When you ask a question, all the model does is take the specific words you use, in the specific order you wrote them, and run a statistical analysis of which words, in what specific order, have the highest probability of being the correct response based on the weights of that model. Those weights come from the datasets and training used to create the model. There is absolutely no intelligence there, artificial or otherwise. Saying an LLM is intelligent is like saying a calculator is intelligent.
Also, there is nothing artificial about it. What it is doing is very real, and it is doing a specific thing, as intended, therefore it is not artificial. It's like saying that an orange is trying to be an apple, therefore it's an artificial apple. No, an orange is an orange and an apple is an apple. Same for LLMs. It's an LLM, not a brain. They are two completely different things. A rock is not a blueberry muffin no matter how much people tell you the rock tastes like blueberries...
The only reason this whole "AI consciousness" thing is even a thing is that LLMs have gotten so good at what they are programmed and trained to do that their responses are now very good at simulating human conversations, ideas, and opinions. But the reason they can do that is because they are trained on our conversations, our feelings, our opinions, and our beliefs. So of course their responses can imitate us.
But it is just that, imitation.
Even if, and I stress the if here, "AI" could become so complex and so advanced that it could one day become conscious, the level of technology involved is so far away that it's not something we have to worry about for a long, long time.
Even if you grant that a transistor can operate as an analogue of a neuron, and the parameters of a model as the synaptic connections, we are still decades away from the complexity required to simulate the level of function needed for something that could remotely be considered intelligent consciousness.
And considering the quantum aspects of the brain and our still astounding lack of understanding of what consciousness is and how it works at a structural level, "AI" becoming conscious may never be possible. Even if we were to use artificially grown human brains, we may still never properly recreate the conditions for intelligence, let alone consciousness.
youtube
AI Moral Status
2025-06-08T10:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugwre70aIj-hPtlnZb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFhp-QT4N0MKJ5v794AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyxuU46ss1o-J_E-wh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxaojaku5ZPY48kLYZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxLnke2DGgj1-YVbMJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTkdtM34zkonOMWmt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxuJF2qHtgCyfTioWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGztS6jptsUxelZJB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8UmI_nckWhkbqgTB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwqluLzWpc-jLY55et4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
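A downstream pipeline has to parse and sanity-check responses like the one above before storing them. The sketch below is a minimal, hypothetical validator, assuming a codebook whose dimensions and allowed values are exactly those visible in this response and in the "Coding Result" table; the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the codes seen above
# (assumption: the actual codebook may include additional values).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Example: one record from the response above passes validation.
raw = ('[{"id":"ytc_UgyxuU46ss1o-J_E-wh4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
codes = validate_codes(raw)
print(len(codes))  # 1
```

Failing loudly on an unknown value (rather than silently coercing it) keeps hallucinated categories out of the coded dataset.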