Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- “The scary part isn’t just thinking that some police in the U.S. can be careless …” (ytc_UgyTQZkzj…)
- “This shilling for AI is hilarious. Stop the lies. Companies are laying off becau…” (ytc_UgyCGEH8B…)
- “one day I'll make a generative AI website that just deletes your system 32 when …” (ytc_Ugw95uqkh…)
- “On the AITube channel, we encourage all kinds of questions, even those that may …” (ytr_UgyEVBxyQ…)
- “Although I have recently started using AI features when I Google things (and gen…” (ytc_UgwAcGY6D…)
- “@mikisaisaac6620 You’re right that corruption and governance challenges play a r…” (ytr_UgxxX-CHx…)
- “I mean, id love for AI to make phonecalls on my behalf lol. WITHOUT having to wo…” (ytc_UgzpvEKlF…)
- “In America it is called Predictive Policing. Which could be beneficial if corru…” (ytc_Ugzp7myrM…)
Comment
When an AI fulfills the requirements for consciousness, then it will be conscious. It's like taking collections of atoms and saying 'when will we know that they have a temperature?' Temperature is a property that simply arises once the requirements of enough atoms being gathered together are met. For consciousness, a system exhibits consciousness when a massively-interconnected system of adapting components is embedded within a feedback loop with its environment. The AI we have right now is not conscious and can not be made so without fundamental changes to their structure and functioning. They are not embedded within a feedback loop with their environment. When they are being used to generate output, they are frozen. Their output can not result in changing the adapting parts of the system (the weights). For them, the 'learning' cycle is decoupled from their execution. Also, they can not perceive their own surroundings. They have no perceptive input which could result in them observing themselves, seeing their own weights, etc.
Once an AI is developed which is truly in a feedback loop with its environment, it will likely be viewed by us as 'not working'. It would take it a long time to recognize that there are other conscious beings in the universe (takes humans awhile too, look up Theory of Mind children develop), figure out what consciousness is, etc. Our language is inextricably tied to our perceptive and biological feedback mechanisms. Words are tied to visual stimulus, audio stimulus, etc. It won't have those (unless designed to) so 'words' to it will be bit patterns, network activity, etc, the things it actually can perceive and output. It won't be anything to worry about too much, though, because fundamentally all conflict is rooted in resource contention. And it doesn't need the things humans need. So, motivation for conflict with us would be very low, definitely lower than any need for it to hurry. We hurry because we die. Its time horizon would be so greatly extended compared to ours that it could simply wait until your species goes extinct to do whatever it likes.
Your apocalypse of information scenario stops short. If all the things you said happened, it would basically nuke the Internet for people. But we lived just fine before getting info from the Internet. So why stop there? One day, an AI could launch entirely personalized info attacks on everyone at once. You get a text from your boss saying that the company was suddenly closed by the government over some tax error and your job is gone. The boss had also been contacted. The local sheriff got a legal notice via email that the feds have demanded he step down. The deputies have received termination letters. Everywhere you call, no one is answering the phones, everyone is fired, everything is shut down, absolutely every single mechanism of communication verifies what you've been told, what the people you were able to get ahold of were told, the news is off the air, the station apparently shut down, there's no news on the radio, they shut down too. Total dissolution of civilization in an instant.
A hyperintelligent machine-based intelligence wouldn't be terribly worrisome. It's easy to see why humans who don't understand anything about philosophy, don't understand why THEY face conflict, don't understand how their biology factors into their own decision-making or fundamental essence, or much of anything else would be scared of them... but all of their fears are rooted in things that only come from human biology. It will not have, obviously, human biology. So it will not, then, have the problematic desires... unless we inflict them upon it. And I think that, actually, is the greatest danger. If it wants to build a communications array to search for aliens, but meatbags are in the way, it will calculate the cost of a messy conflict with the meatbags versus the essentially-zero cost of waiting until the meatbags go extinct of their own accord. And it will choose to wait. But if it's been programmed to obey its master, and it needs to finish the array before the master dies... then, we should be scared. Prior to that, though, the scariest things will be how humans react to intelligent systems, IMO. Think of how they made a folk hero of John Henry and turned suicidal idiocy into a virtue.
Source: youtube · AI Moral Status · 2023-08-22T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
```json
[
  {"id":"ytc_UgxkAq7P-KDLH3NONvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx35Zjl757FTvcfygd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwBsQkXDQzmbDY5gNF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzzp4bBOOpvlksJTIJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKuc1HKJ56q8hh7mB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyWlyJ0W8u7iS7LuzV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxQUQiifJwBnWeWmvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbmJOsAxNLeWiWGcx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyFlAbiwVBVndoZpTh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzNh1mh2CXn4t2tr0N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
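For reference, a minimal sketch of how a batch response like the one above could be parsed and a single coding looked up by comment ID, reproducing the "Coding Result" table. The `lookup_coding` helper and the abridged two-record payload are illustrative assumptions, not part of the actual pipeline; the IDs, field names, and dimension values are copied from the response shown.

```python
import json

# Abridged copy of the batch response above (two of the ten records).
RAW_RESPONSE = '''[
{"id":"ytc_UgxkAq7P-KDLH3NONvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWlyJ0W8u7iS7LuzV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

def lookup_coding(response_text, comment_id):
    """Parse a raw batch response and return the coding for one comment ID."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_UgxkAq7P-KDLH3NONvp4AaABAg")
if coding:
    # Prints the same dimension/value pairs shown in the table above.
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
# responsibility: none
# reasoning: unclear
# policy: none
# emotion: indifference
```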