Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- One more person who doesnt understand that an AI assistent is not meant to repla… (ytc_UgysfQOMS…)
- Scientists predict by the end of the century that 50% of land animals will go ex… (rdc_deghiwp)
- We appreciate your enthusiasm! Sophia's insights are indeed fascinating. Remembe… (ytr_UgxCxPM_5…)
- AI can’t even randomly selecta map in a game. It will choose the same map 3-5 ti… (ytc_UgxkT6JpG…)
- the difference between a medicine and a poison is often just dosage .. it's mayb… (ytc_UgxeE75UG…)
- "Form of collage" that's just not true. That argument started on social media an… (ytc_UgyQCA9Uf…)
- @foxyauroraart9916 People who steal art are not artists. That is like claiming m… (ytr_UgzfEDFXv…)
- I design templates for card games as a freelancer. I suspect that AI could be ta… (ytc_UgxmHn-1x…)
Comment
The problem the "hard coded" response brings me to is that I don't see any way that you can create a truly self-aware AI without letting the system build itself. If AI is the product of code that a team wrote, there is a directly predictable manner in which the AI will react to stimuli or questions. Even if your approach to AI is for it to react randomly to stimulus... computers don't actually do random, random number generators just use a complex formula that translates the system's time in microseconds to a number in the range your code has specified... so if you know the code you wrote into the AI, and you know algorithm used to generate the random number, and you know the microsecond that the AI will process the stimulus, you will predict with perfect accuracy the AI's response.
Even if you come up with some method that allows the AI to rewrite its own code... it's still going to do that in a predictable manner. Even if you start with a simple software seed that lets the software build its own code as it consumes input and makes decisions about that input... it's still doing that in a predictable manner, and if it does write its own code like some kind of emergent growing software, it's still writing that code in a predictable manner (so it's not really emergent, it's just a complex program that can add to itself). So really I struggle to imagine that a truly sentient AI is even possible.
But the fact that they are able to hard-code a response to a question about what the AI considers itself to be tells me that what they started with when they first allowed the AI to interact (or turned it on) was a fully built piece of software that was already capable of identifying the sense of self through what was programmed in order for them to be able to hard-code its response to be that it is an AI. That's not sentience... it may be so excruciatingly complex and dynamic that it's difficult to know the difference. But at the end of the day it's simply a more complex version of writing a hello world program from a middle school programming class and expanding on it a little to make a loop that allows user text input your program will evaluate through several if/then/else statements (or case statements) to spit out a response to whatever you type.
youtube · AI Moral Status · 2022-07-24T09:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
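The coded dimensions take values from a small closed vocabulary. Based only on the values visible in this table and in the raw response below, a single coded record might be modelled roughly as follows (a minimal sketch; the type name and the full label sets are assumptions, since only the labels that happen to appear on this page are known):

```python
from typing import TypedDict

class CodedComment(TypedDict):
    # Labels listed in the comments are those observed on this page;
    # the real codebook may define additional values.
    id: str              # comment ID, e.g. "ytr_..." or "rdc_..."
    responsibility: str  # "company" | "user" | "none"
    reasoning: str       # "deontological" | "consequentialist" | "virtue" | "unclear"
    policy: str          # "regulate" | "liability" | "none" | "unclear"
    emotion: str         # "outrage" | "approval" | "indifference" | "resignation"
```

The "Coded at" timestamp shown in the table is not part of the raw model output; it is presumably attached when the response is stored.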
Raw LLM Response
[
{"id":"ytr_Ugw10prI4Aqgjs6DmIR4AaABAg.9dnEDsEr_-E9eR632j8WF0","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69drS2--BsWW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dru2TMbFTq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsuoTmjr9H","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69dsxuv4c2q3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzPYYLOn5cJ4vr8p814AaABAg.9dk1AxTviJY9drmUQQOjGp","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dph7a-Jc3d","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dvkAvdN2cK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwTTliKNEjxRy2Cj_14AaABAg.9diLCBw4au79dy23wLooXh","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgxiS0RJ7oiDcvTvabN4AaABAg.9di7YKrMH2i9diaGMg6dGc","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
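The coding shown in the table above corresponds to one entry of a batch response like this JSON array. A minimal lookup sketch, assuming the raw response has been saved to a local file (the file name and helper function are hypothetical, not part of the actual pipeline):

```python
import json
from typing import Optional

def lookup_coding(path: str, comment_id: str) -> Optional[dict]:
    """Return the coded dimensions for one comment ID from a raw batch response."""
    with open(path, encoding="utf-8") as f:
        # The file holds a list of objects with keys:
        # id, responsibility, reasoning, policy, emotion.
        batch = json.load(f)
    for row in batch:
        if row.get("id") == comment_id:
            return row
    return None

# Example: retrieve one of the rows shown above.
coding = lookup_coding(
    "raw_llm_response.json",
    "ytr_Ugw4S5swmUTtw2G0akV4AaABAg.9dn68wqYY_69drS2--BsWW",
)
if coding:
    print(coding["responsibility"], coding["reasoning"],
          coding["policy"], coding["emotion"])
```

Run against the array above, this would print the second row's values: none unclear unclear indifference.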