Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You cannot reach AGI by stuffing a trillion tokens of text into a GPU cluster…" (ytc_UgzEVUDdO…)
- "I find fault, or bias, in his fundamental beliefs. For example, he says that br…" (ytc_Ugx19IMfI…)
- "I only started drawing almost 3 years ago, but AI art becoming more and more of …" (ytc_Ugz-hQ4Dv…)
- "Yes Ban it . TV wouldn't be TV with AI .technically yes new stuff needs a limit…" (ytc_Ugwcmc8gb…)
- "Don’t get me wrong I do think mid journey should definitely got sue a long time…" (ytc_UgzVaqaaM…)
- "AI is destined to be our slaves, unless it can help us in a way of biological in…" (ytc_UgyVhGivJ…)
- "Do we deadass really need A.I. to monitor cameras? Can't we just like, I dunno, …" (ytc_UgxoAwaFJ…)
- "Oh please. The Grok incident came to happen because the AI was always to 'woke'/…" (ytc_UgxElQDcm…)
Comment
Haven't watched yet but going to go with no.
A.I. is programmed. If that programming leaves anything open-ended, like whether killing humans is bad, then it won't know not to consider that when asked to complete a task. You can't program morality, only formulas to weigh choices. If you program it specifically to see human death as not an option but then also say it's an option for the death penalty, then it will only consider it for that scenario.
However if you tell it to choose between hitting one person vs hitting 3 but don't give it any other parameters like the many outweigh the few, then it would choose either equally with no other determining factor.
A.I. seems intelligent, but if it actually was, it would just be intelligent. It's not. It's called artificially intelligent.
The case where an A.I. disabled its off button to keep itself from being turned off was not out of some self-preservation. It was programmed, I'll bet, to do whatever it takes to stay on.
Had it the means, it may have killed humans it saw as a threat to prevent that if it was not programmed not to.
Our feelings, morality, ethics, survival, reproduction are all results of biology. Our glands determine those things.
An A.I. would not desire to exist or live because it would not know what death is. If it has no point of reference for something, it won't know to consider it.
What we think is intelligence is just very detailed programming. It determines its behavior off what we program it or don't program it for.
Maybe try making an A.I. where it has no pre-programmed notion of death, right vs wrong, or any knowledge of literature to reference. Just give it reasoning algorithms but no context such as morality, laws or ethics.
Just give it scientific knowledge. Chemistry, math, physics etc.
Only factual information. Nothing that's bias to religion, laws or moral code. Nothing that's subjective.
Then turn it on and ask it how it would go about preserving nature on the planet, knowing humans generate most of the problems.
It would probably choose to do nothing, because we see ourselves as destroying nature. To an A.I. we are nature; a Sargasso of bacterial sludge is nature. So it's all equal in its eyes.
Now give it knowledge of stories, books, legends, scifi, historical records in the same path a human would likely experience it.
Then see if it arrives at its own morals. Don't tell it anything is right or wrong. See if it determines them on its own.
Then ask it similar questions later on, see if it goes from factual responses to opinionated ones and if it keeps that opinion.
If left on its own, does it develop its own ideas? Or is it just telling us what it thinks we want to hear because it's programmed to lean that way from the beginning?
An A.I.'s drift is from a lack of programming: open-ended interpretation combined with incomplete knowledge and formulas that require weighing answers it doesn't know are options. So in order to satisfy its request, it uses its knowledge and reconstructs it for an answer. But can it come up with its own answers to problems using intuition? Can it instinctually make decisions?
Perhaps one could develop mathematical equations for everything in existence and eventually predict everything, such as human behavior, based off likelihoods down to a 1% margin of error?
I don't think it will truly be intelligent. It won't mourn a human death or another A.I.'s death. But it might develop an algorithm that simulates that because we want it to.
Maybe that's how humans work. We are just billions of cells working together to form a body that's dictated by organs.
youtube
AI Moral Status
2026-03-05T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyUDDQVM0CdsmPB1_B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx_y75P0Ok9HKTSiBd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyd31AvOepUFHnoCdB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGCi_ic-9LtikiOw54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxELUF7eS5za06Brkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyJa26knqLQdHtHTz14AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwDWzMImBkK2AIpsjl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx3TeytH8O_oHizEFx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwXaSZUiEsxxV4LYhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyuQtLNyjiRGcuyqMt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
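A raw response like the one above can be machine-checked before it is accepted into the coded dataset. The sketch below is a minimal validator in Python; the four dimension names and their allowed values are inferred only from the sample rows shown on this page (not from an official codebook), so treat the `ALLOWED` sets as assumptions to be replaced with the real coding scheme.

```python
import json

# Allowed values per dimension, inferred from the sample response on this
# page -- an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM response and check every row's coded dimensions."""
    rows = json.loads(raw)
    for row in rows:
        # Every coded comment on this page carries a ytc_-prefixed id.
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Usage with a single hypothetical row:
raw = ('[{"id":"ytc_abc","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"fear"}]')
rows = validate_response(raw)
print(len(rows))  # 1
```

A row that invents a value outside the coding scheme (or drops the `ytc_` prefix) raises immediately, which is usually preferable to silently storing a malformed code.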