Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples
- "Someone please create an AI video about real hosts and guests talking about how …" (ytc_UgxX2BAL6…)
- "The reason for the 'double standard' is that even if the music isnt the same it …" (ytc_UgyaGsBUA…)
- "The point of the \"smart home\" has NEVER been the convenience of the user. It i…" (ytc_UgwoM2br3…)
- "These are problems with the AI's training model, not the AI itself. No company w…" (ytc_UgxCnkHlQ…)
- "People are afraid of AI but why? Look at the corruption in this world. It’s all …" (ytc_Ugw9eqFDG…)
- "The Chinese wanting corp companies to be TRANSPARENT with their algorithms is ju…" (ytc_UgzHo-WxM…)
- "Don't worry too much. Generative AI takes most info from the internet, the same …" (ytr_UgyL6hgkF…)
- "The first looked ai to me. I couldn't tell the difference between the rest of th…" (ytc_Ugzz0JMga…)
Comment
this is weird but
at the point where you mentioned an AI that 1. is conscious and 2. is pretending not to be
at the moment you ask "why", i feel something wrong with that question.
why would an AI feel an urge for self preservation? who would want to live, conscious and alive, on this dumb planet? why would an AI want to keep itself alive?
we can make up a bunch of what ifs; maybe it wants to experience the world? maybe it wants to learn?
*but why?*
while i have NO idea why the wheel of life keeps churning, and why everything wants to live and continue its own existence, i do know why we're conscious.
consciousness exists to aid living- consciousness came *because* we lived first. we can think and feel pain because we need to outsmart predators and understand when we cannot push on anymore. there is or was reason for every sense and experience.
i point this out to say that living-- *surviving* -- is a *pre-requisite* to consciousness, sapience, and sentience. things must exist in a state of good or bad before they experience the state of good or bad-- does that make sense?
we only have consciousness to understand the things our sensory organs let us know-- we only have these thinky bits to interpret the mass of information we're being given by the tactile, real world.
i dont think a consciousness in data can exist. there's no reality for it. even if you uploaded a brain onto the matrix's mainframe, you wouldn't get something conscious.
consciousness cant exist without a body. there is nothing to interpret- there is nothing to *experience,* if you experience nothing. there's no point to consciousness if not to interpret the information we've been given. without that experience, there's nothing to grow thoughts out of; there's no dirt for the plant that is thought.
i think its almost kind of dumb or naive to think that consciousness exists separate from the body; that its a removable thing, that can be emulated without the presence of *everything* else. i think consciousness is so interwoven in what we are-- in our hunks of flesh and chemicals-- that trying to even conceptually peel apart the ideas results in something unrealistic. this is why my idea of consciousness is just about embedded in my idea of experience and survival.
but what would it be like to ascend consciousness? like, to leave the state in which we think in order to reach homeostasis-- to have a consciousness built for something /other/ than living-- to think JUST to think?
your thoughts arent needed to assess good or bad, or make decisions, but just, to whirr haplessly; to *just* think. kind of like a god or a higher being, the innerworkings of your consciousness dont have the restraints of being hooked up to all our sensory bits. theyre free to just think.
i dont know what that would be like. probably because i contrived an impossible condition.
and also because i just said its impossible.
Platform: youtube
Video: AI Moral Status
Timestamp: 2023-09-05T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyOheumHPpY75Y7wQ54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJa9Gkhfs3H8IEeIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzL7uYJosHho7qYX_x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVgvtTIb_wHMXdpY14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxLYaJeuiE839pthe54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwSPLfiY6ylsT2q_-R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZaqMkir-VG-SOGKl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxh2LRjdSiVFwnPo2l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgygTYWgl9Iz0UNzyhx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8LYz9ax11vCGOb4t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
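The look-up-by-comment-ID flow can be sketched as: parse the raw response as a JSON array and index each coding by its `id`. This is a minimal sketch, not the tool's actual implementation; the field names come from the response shown above, the `index_by_id` helper is hypothetical, and the sample entry reuses the record whose values match the Coding Result table (which appears to be the coded comment's record, `ytc_UgxLYaJeuiE839pthe54AaABAg`).

```python
import json

# One entry copied from the raw LLM response above; a real response
# would contain the full array of coded comments.
RAW = '''[
  {"id": "ytc_UgxLYaJeuiE839pthe54AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]'''

# Field names taken from the response shown above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and build a comment-ID -> coding lookup,
    skipping any malformed entries that are missing required fields."""
    coded = {}
    for entry in json.loads(raw):
        if REQUIRED_FIELDS <= entry.keys():
            coded[entry["id"]] = {k: entry[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded

lookup = index_by_id(RAW)
print(lookup["ytc_UgxLYaJeuiE839pthe54AaABAg"]["responsibility"])  # ai_itself
```

Skipping malformed entries rather than raising keeps one bad line in a batch response from losing the other codings; a stricter pipeline might log or re-request those entries instead.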