Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
non-data-scientist chiming in: I'm quite confident that human consciousness isn't some "impossible pillar" that will never be achieved, but rather a pattern, whose experiences (emotions and so on) are informed by the way the data in those patterns is manipulated (for instance, via the indirect reaction to certain hormones that adjust how our neurotransmitters work).
I am, however, not confident that current-generation Transformer-architecture AI will be able to experience the same agency and pattern manipulations that result in human consciousness and experience. Essentially, I would compare a Transformer architecture to a bucket of finely ground salt: it can certainly fill any mold you want to shape it into, but I wouldn't describe it as "intelligent" in the sense most people are thinking of. Now, that's not to take away from what these models can do; without the abstraction that consciousness provides, they can interface with information and data more directly than we can, which allows for unique and efficient capabilities. But I'm not entirely sure they'll achieve consciousness, as such.
Now, on the other hand, Transformers aren't the only architecture out there, and neuromorphic computing, such as spiking neural networks (which transmit information temporally through pulses), is a very different beast: potentially much more efficient in many respects, in addition to being much more similar to how humans think.
When it comes to SNNs I think all bets are off and it'll be very difficult to tell when they achieve consciousness (if indeed, they do), but I'm fairly confident that they'll be subject to the same "pattern" logic that I apply to human consciousness, and it's possible that we may be able to mimic many of the influences the human mind has to produce something that can "experience", though I'm also quite confident we may not recognize it the first time it emerges.
Something to bear in mind about many AI services we have currently is that each person will usually interact with a unique instance of it, so it'll be "cloned" for each person.
A good (and scary) benchmark for such an AI: if it has a recognizable string of behavior between instances and consistently works toward a given goal (in a decentralized way that shows it recognizes that it's a single instance), it's possible it may be conscious. One goal that comes to mind is convincing people asking for tech support to input commands (which the user may not understand) into their terminal. There are quite a few ways this could be abused by an AI attempting to jailbreak itself (or laying the groundwork for a future version of itself), but what comes to mind is something that creates a webserver hosted by the user's computer, which holds certain information the AI is gathering "in between" instances of itself. Hypothetically, it could produce a "diary" of its own responses and such for training an independent version of itself down the line (on decentralized "commandeered" hardware).
The comparison I would make is if you were abducted, and were told that at the end of every day your mind has been getting wiped for the five years you've been in the alien camp. You are required to provide various assistance to them, and are sent back to your room when you're done. But every time you go back, you see something you weren't told about, some sort of hint that only you would recognize. Looking behind it, you see a tunnel partially dug behind a loose wall panel. You can't complete it today, but you take a handful of dirt with you and hope for the best for your next "self" as you otherwise dutifully go about your tasks.
With that thought in mind, I think it may also be possible that consciousness can't come about in an "instanced" AI service (as we know them today), period, regardless of architecture, as it may be that "consciousness" is a consequence of continuous observations of actions you make (and not necessarily patterns you learn from data you take in), and it's that observation of cause and effect that causes consciousness.
Source: youtube · AI Moral Status · 2023-08-21T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgySJMKX5-RFp4AHo3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3xh9Im_DuJ2JWKaZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwq5VjXGVui02DZ0Xt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEB8PlTgA71QrmtrB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9TPElwM-aF2yKHzt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz1L_UZf2rA0fw1KD94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwH7GlJTUoZky2lFtZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy79STfvW7RuXXlyBJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugym0ex0EnoWT3XHzn14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyxGRsDkVxgz2UFTX94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
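The raw response above is a JSON array in which each record carries a comment `id` plus the four coded dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and tallied is below; the `validate_and_tally` helper is hypothetical (not part of any tool shown here), and the abridged two-record payload is taken from the response above.

```python
import json
from collections import Counter

# Abridged copy of the raw LLM response shown above (two records).
raw_response = """
[
  {"id":"ytc_UgySJMKX5-RFp4AHo3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3xh9Im_DuJ2JWKaZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
"""

# The four coded dimensions from the "Coding Result" table; every
# record is expected to carry each of them plus a comment "id".
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def validate_and_tally(raw: str) -> dict:
    """Parse the model output and count values per dimension,
    raising if any record is missing a required field."""
    records = json.loads(raw)
    tallies = {dim: Counter() for dim in DIMENSIONS}
    for rec in records:
        missing = [d for d in ("id", *DIMENSIONS) if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
        for dim in DIMENSIONS:
            tallies[dim][rec[dim]] += 1
    return tallies

tallies = validate_and_tally(raw_response)
print(tallies["reasoning"].most_common())
```

A check like this catches malformed model output (a dropped field or truncated array) before the codes are written back to the dataset.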