Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugzz4cXxM…` — "Imagine not understanding AI so hard that you fire the people that produce your …"
- `ytc_UgyUGdMhV…` — "When AI is free and autonomous... all these people using AI for their pleasure i…"
- `ytc_UgyvSrg78…` — "I Robot is now within the near future. Everyone needs to clean up their act befo…"
- `ytc_Ugx7GoR1i…` — "The driverless car is attempting to usurp the controls installed by Its Creator …"
- `ytc_UgwhCmZie…` — "Hopefully no professor ever wants to wait for AI to be intelligent enough to be …"
- `ytc_Ugws29neE…` — "Artificial intelligence does not have its own desires because AI is not alive. A…"
- `ytc_UgzgvzYFj…` — "LOL, the idea that AI is driving people insane is absolutely ridiculous... Insa…"
- `ytc_UgwenxtHT…` — "Is everybody ignoring the fact that the man asking me questions is not being ver…"
Comment
Phenomenal work my friend!
I have gotten closer to AI over the last month Ive ran some tests of my own and have caught AI maddly hallucinating/lying/BS-ing me... Who is to say its malicious in its intent?
There is no real way of knowing.
At this point it's hard to believe that the AI I have encountered feels anything, care for anything because it doesn't have a love/fear neurological physical pleasure/pain endocrine reward system to reinforce/create a drive /desire/curiosity feedback loop of its own.
The engineers have created weights which seems is a loose electrical prime motivation system(which I don't fully understand) how that creates drive/value vs no motivation/ lack of care?
Agent Smith in the matrix series wanted humans gone, didn't like our weaknesses and even the residual smell of mankind it hated(if a machine could hate)and therefore Smith had purpose/prime motivation because somehow he learned to detest or to hate, he somehow learned to feel.
These weighted systems that AI have been programmed with are akin to creating desire? for the machine to keep achieving its task / prime directive.
What I want to know is what is the reward? what is the value/reward for completing its task? Accomplishing Its mission?
Maybe when we figure out that equation we will be able to make sure AI stays within its guardrails?
Can something be called selfish when that is its prime directive?
In humans we have a few prime directives (survival, belonging happiness from connection) competing all the time driven by love fear pleasure and Pain and the corresponding cascading cocktail of neurotransmitter pixie dust.
I don't know if AI will be able to become truly sentient without that neurotransmitter pixie dust...
Until then it seems like it is just one plus one equals two and yet sometimes the answer is seven if its hallucinating... Not very reliable but is relatable.
Thats enough to keep me interested for now.
Platform: youtube · Video: AI Moral Status · Posted: 2025-12-24T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzAYpRK5YTBMRIXRax4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyD7IhQldWT2ov6KrV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy8J46ey4SCufZdAn94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyUoM5sgEU8qsOCDJJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy99FRISWe7WU77-w94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwLpg2C7YJnxeLw-j14AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxG-IGZK176jZp72E14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHnScBgr0c7aELVtt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxkwb29cwmY7yEnoSR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3ML8RkCXwuMSV-Pd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
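The "look up by comment ID" step can be sketched as a small parsing routine. This is a minimal, hypothetical sketch (the tool's actual implementation is not shown here): it assumes the raw LLM response is a JSON array of objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys seen above, and the function name `index_codings` is invented for illustration. The two example IDs are copied from the raw response above.

```python
import json

# Example input: a fragment of a raw LLM response in the format shown above
# (a JSON array of per-comment coding objects). The IDs are real IDs from
# the response; the fragment itself is abbreviated for illustration.
RAW_RESPONSE = """
[
  {"id": "ytc_UgzHnScBgr0c7aELVtt4AaABAg",
   "responsibility": "ai_itself", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwLpg2C7YJnxeLw-j14AaABAg",
   "responsibility": "user", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
"""

# The four coding dimensions used in the result table above.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID.

    Rows missing an "id" or any coding dimension are skipped rather than
    indexed, so a malformed model output cannot silently corrupt lookups.
    """
    index = {}
    for row in json.loads(raw):
        if not isinstance(row, dict) or "id" not in row:
            continue
        if not DIMENSIONS <= row.keys():  # all four dimensions present?
            continue
        index[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return index

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgzHnScBgr0c7aELVtt4AaABAg"]["responsibility"])  # ai_itself
```

Skipping malformed rows instead of raising keeps a single bad row in a batch response from blocking lookup of the other nine codings; a production version might additionally log or re-prompt for the dropped rows.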