Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I use often dalle to make ai art, but i was an artist before, i am just colourbl…" (ytc_Ugy3R66Tr…)
- "@Enterbasicnamenere People buy apple products made in China with free labor, or…" (ytr_Ugx-GRFYC…)
- "@KarlBunker I understand your feelings, as I would get mad at God sometimes for …" (ytr_Ugz0TFFVH…)
- "AI can never be as smart us because they can never cover the vast subject area t…" (ytc_UgwK1t9X5…)
- "Goooooood. The AI bubble needed bursting. Those jerks at OpenAI etc were basical…" (rdc_m9fky07)
- "Neil needs to stick to astrophysics, where his accumulated knowledge is strong. …" (ytc_UgzRJ8oGF…)
- "AI is meant to be a tool to evade work and skill, IT DOES NONE OF THAT and never…" (ytr_Ugz0FBOrg…)
- "While these models can seem eerily clever and creative, they are essentially jus…" (ytc_Ugy4g9JhM…)
Comment
Well, there is a huge potential for AI to become sentient, since both computers and brains are technically huge networks of "cells" (transistors, in a computer's case) between which electrical signals travel, simulating "thinking". The main reasons I would argue against AI being sentient and self-aware (at least in the same way we are) are:
1) These language models are pretrained, meaning that learning from new information and generating output based on previously learned information are two completely separate modes of operation. If a "nervous system" only reacts to input without directly learning from it (as when someone types prompts into ChatGPT), one can argue it's no more conscious than a jellyfish or one of the other sea creatures that lack a brain. If a "nervous system" only takes in input without reacting (as during a training run), that is the equivalent of a newborn baby on its first day. So if you were a GPT model, you would experience life as a baby in one period and a jellyfish in the next session; in other words, not very sentient.
2) Human brains are not completely blank at birth. Billions of years of evolution have produced a DNA code that builds an algorithm in our minds for handling new input and learning from it: biological instincts such as hunger, fear, thirst, the sense of pain, and attention to new vocabulary. For a human-like brain to develop, we would need to hard-code the computer the same way evolution hard-wired our brains, and we have no idea how that is supposed to be done.
3) As long as AI runs in the cloud, it will be hard for it to be self-aware, because it literally has no self: not "no self" as in not being sentient, but "no self" as in not physically existing as one physical entity. Humans have qualities that make them unique from each other: their location, their body shape, and their ideas. An AI running on multiple servers is everywhere and nowhere at once.
Platform: youtube
Title: AI Moral Status
Posted: 2023-04-24T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[{"id":"ytc_Ugwk4dIWchsyYYVJg_J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSpKr3wbhZ4TyXXBt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtOc5_k3y1NVYmPdZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCaaCN3jnjfbNtxap4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxnaPFfGv0388XE4mF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}]
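A raw response like the one above can be turned into the per-comment coding table with a small amount of parsing and validation. The sketch below is a minimal example, assuming the response is always a JSON array of objects with an `id` plus the four dimensions; the allowed category sets are inferred from the sample output here, and the actual codebook may define more values.

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    lookup table keyed by comment ID, rejecting rows whose values fall
    outside the expected categories."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = row
    return coded

raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
table = parse_batch(raw)
print(table["ytc_example"]["emotion"])  # -> approval
```

Keying the table by comment ID is what makes the "look up by comment ID" view cheap: each sample row in the UI only needs a single dictionary lookup to show its coded dimensions.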