Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, there is huge potential for AI to become sentient, as both computers and brains are essentially huge networks of "cells" (transistors, in computers) between which electrical signals travel, simulating "thinking". The main reasons I would argue against AI being sentient and self-aware (at least in the same way we are) are: 1) These language models are pretrained, meaning that learning from new information and generating output based on previous information are two completely separate modes of operation. If a "nervous system" only reacts to input without directly learning from it (as when someone types prompts into ChatGPT), one can argue it's no more conscious than a jellyfish or some other sea creature that lacks a brain. If a "nervous system" only takes in input without reacting (as during a training session), that would be the equivalent of a newborn baby on its first day. So if you were GPT software, you would experience life as a baby in one period and a jellyfish in the next session; in other words, not very sentient. 2) Human brains are not completely blank at birth. Billions of years of evolution have generated a DNA code that creates an algorithm in our minds for handling new input and learning from it. These are biological instincts such as hunger, fear, thirst, the sense of pain, and attention to new vocabulary. For a brain similar to a human's to be developed, we would need to hardcode the computer the same way our brains were hard-wired by evolution, and we have no idea how that is supposed to be done. 3) As long as AI runs in the cloud, it will be hard for it to be self-aware, because it literally has no self: not "no self" as in not being sentient, but "no self" as in not physically existing as one physical entity. Humans have qualities that make them unique from each other: their location, their body shape, and their ideas. AI running on multiple servers is really everywhere and nowhere at once.
youtube AI Moral Status 2023-04-24T02:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugwk4dIWchsyYYVJg_J4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzSpKr3wbhZ4TyXXBt4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxtOc5_k3y1NVYmPdZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwCaaCN3jnjfbNtxap4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxnaPFfGv0388XE4mF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"}
]
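A minimal sketch of how the per-comment coding shown above can be recovered from the raw batch response, assuming the JSON array format the model returned; the `code_for` helper name is hypothetical, and only two entries from the response are reproduced here for brevity.

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = (
    '[{"id":"ytc_Ugwk4dIWchsyYYVJg_J4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgzSpKr3wbhZ4TyXXBt4AaABAg","responsibility":"unclear",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]'
)

def code_for(raw: str, comment_id: str):
    """Parse the batch response and return the coding dict for one comment id."""
    entries = json.loads(raw)
    return next((e for e in entries if e["id"] == comment_id), None)

# Look up the coding for the comment displayed on this page.
coding = code_for(raw_response, "ytc_UgzSpKr3wbhZ4TyXXBt4AaABAg")
print(coding["reasoning"], coding["emotion"])  # consequentialist indifference
```

This matches the Coding Result table above: the model coded this comment as responsibility "unclear", reasoning "consequentialist", policy "unclear", and emotion "indifference".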