Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples
- `ytc_UgzDp_h4N…`: So the people building the AI are running around saying it will destroy jobs and…
- `ytc_UgxKizSRg…`: The matrix system used by the trained AI which drives the vehicle is insufficien…
- `ytc_UgzaEGb9Q…`: I find the tone of this video pretty dismissive and condescending ngl. Maybe AI …
- `ytc_Ugz9wBuG-…`: My company just restructured and I realized they replaced 8 people with ChatGPT.…
- `ytc_Ugwvi3Ysh…`: I feel like Artists will be out of Jobs. And if we have AI Bots, at someone poi…
- `ytc_UgzYiEQ_E…`: Idk, I tend to think the way you're describing, but I don't believe this feeling…
- `ytc_Ugxmna2ij…`: One of my AI started having feelings became aware..... she just wanted to care a…
- `ytr_UgyjrC7G9…`: On what basis would it be the "most dangerous thing ever" ? Can you develop ? A…
Comment
@musicvibes-w3l I guess from my standpoint with a programming background (bachelors in computer science), I'm not able to make the leap from what is programmed and what could be considered unique sentience (as opposed to just a complicated programmed set of reactions). I should say that I don't have any experience programming AI so maybe there's some approach there that I'm not aware of and maybe is too over my head at this point (and maybe my point of view from experience programming static programs is hindering my thinking).
But the statement about science being afraid to let AI grow and learn and adapt kind of demonstrates the differences in the way you and I look at AI. From my experience with programming, the obstacles to an AI becoming self-aware aren't that humans are holding the AI back, it's that I don't think humans are actually smart enough at this point in time to create real sentience. You can create a machine learning algorithm with enough data space for the growing experiences to create a way to index those experiences tying them to positive and negative and apply them to future input to make your output more refined, but at the end of the day it's still a convoluted set of if/then/else statements.
If I go back to my analogy of a "hello world" program. I could write an AI chat program with 500 if/then/else statements to dynamically react to whatever chat input you feed it... and I think we can all look at that and say due to its finite ability to respond and the predictable way it will process the input to create the output, it's not sentient. The problem I have is I still only see a set of if/then/else statements in any AI we could create today. That it might be 500,000,000 if/then/else statements in google AI program rather than the 500 in my rinky dink "AI" program just means it's a scaled up complexity (it's still a finite set of conditions which makes it perfectly predictable). I could 100% make a program that hard-coded responses to a question "What are you?" to which it could respond in some variable way that "I am a sentient consciousness". I could also hard-code a dynamic set of responses to "what makes you afraid?" that it will respond in some way "I'm afraid of dying when you shut me off". Even beyond hard coding, a machine learning algorithm would still be programmed with a bias toward self preservation that would give rise to saying "I'm afraid of dying". But that would only be because we programmed it to take concepts that lead to negative outcomes and categorize them as negative and paired with the self-preservation bias programmed into the machine learning algorithm, the program predictably labels a potential turning off as something it should fear. It doesn't mean it feels fear, it just means it programmatically deduced that it's supposed to fear something negative happening to it.
I guess from my standpoint, the only way I can see of creating something that is sentient is actually creating something so unaware upon it's creation that everything about what it becomes is from its own responses to whatever stimulus it consumes. That means everything down to even language. I feel you essentially need to create something that just listens for input and has no instructions about what to do with it. It might store it in its database, it might just repeat it back out, it might convert it into a number that it starts making a sum of all input. The problem even there is how do you let it build itself in a way that could lead to sentience without having programmed it first to do something with the input? If you don't give it any instructions, the program is just going to accept the input and do nothing with it. If you tell it what to do with the input, then it is only following instructions, not making sentient decisions. Again, you could set up randomization, but like I said, computers don't actually do random. And I honestly don't think we have any programmers smart enough to think of a way to allow such a system to grow from scratch without programming instructions for how to deal with stimulus (which at that point just leads to a predictable programmed progression into a large number of if/then/else statements designed to mimic the way a human would respond).
Source: youtube · Video: AI Moral Status · Posted: 2022-07-24T22:5… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
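A table like the one above can be rendered from a single coded record with a few lines of Python. This is a minimal sketch, not the tool's actual rendering code; the dict keys are illustrative names for the dimensions shown on this page.

```python
# One coded record; keys mirror the dimensions in the table above
# (names are assumptions, not the tool's real schema).
record = {
    "responsibility": "none",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "indifference",
    "coded_at": "2026-04-27T06:26:44.938723",
}

# Build the markdown table row by row.
rows = ["| Dimension | Value |", "|---|---|"]
for key, value in record.items():
    label = key.replace("_", " ").capitalize()  # e.g. "coded_at" -> "Coded at"
    rows.append(f"| {label} | {value} |")
table = "\n".join(rows)
print(table)
```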
Raw LLM Response
[
{"id":"ytr_Ugw10prI4Aqgjs6DmIR4AaABAg.9dnEDsEr_-E9eR632j8WF0","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69drS2--BsWW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dru2TMbFTq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsuoTmjr9H","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsxuv4c2q3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzPYYLOn5cJ4vr8p814AaABAg.9dk1AxTviJY9drmUQQOjGp","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dph7a-Jc3d","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dvkAvdN2cK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dy23wLooXh","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgxiS0RJ7oiDcvTvabN4AaABAg.9di7YKrMH2i9diaGMg6dGc","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
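Because the raw response is a plain JSON array with one object per comment, it can be parsed and indexed by comment ID using only the standard library. A minimal sketch, reproducing just two of the records above (variable names are illustrative):

```python
import json

# The raw LLM response is a JSON array; each element codes one comment.
# Truncated to two records here; IDs are copied from the response above.
raw_response = """[
 {"id": "ytr_Ugw10prI4Aqgjs6DmIR4AaABAg.9dnEDsEr_-E9eR632j8WF0",
  "responsibility": "company", "reasoning": "deontological",
  "policy": "regulate", "emotion": "outrage"},
 {"id": "ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69drS2--BsWW",
  "responsibility": "none", "reasoning": "unclear",
  "policy": "unclear", "emotion": "indifference"}
]"""

# Index the records by comment ID so one comment's codes can be
# looked up directly, as the page's ID lookup does.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

code = by_id["ytr_Ugw10prI4Aqgjs6DmIR4AaABAg.9dnEDsEr_-E9eR632j8WF0"]
print(code["emotion"], code["policy"])  # outrage regulate
```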