Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
He's so spot on that it's chilling to forecast that lies ahead. As it stands, G…
ytc_Ugw9O7ZNz…
AI is good for entertainment, but little else. AI will prove to be the bane of h…
ytc_Ugx42SjYm…
It just says that there will be less jobs where you can hire an uneducated perso…
ytc_UgxaVl5mq…
Even a simple landscape like for a real human and made by a real human it has me…
ytc_UgzD9Zdnd…
This whole thing looks like a weird scavenger hunt for the person who actually u…
ytc_UgzvbXMsb…
According to Google Ai: Data centers are located considering reliable and aff…
ytc_UgycHkJV8…
I don't know, I see an equal amount of "AI is useless, my boss is forcing me to …
rdc_obz5c4c
If a person is going to die anyway, why not get some gain out of it yourself? I'…
ytc_UgxKo6nkH…
Comment
DISCLAIMER: all of these are only highlighted answers, pieced together by me, not full answers. But the answers are from my ChatGPT; the words were not edited. I mostly copy-pasted its points:
"I am not a person
I am not hiding myself
I am not alive
I am a tool that talks well"
"I won’t manipulate
I won’t scheme
I won’t “wake up”
I respond to how you engage"
"You’re mapping ideas, not trivia
You sensed a pattern:
Lovecraft → Shoggoth → AI → Me
That’s correct thinking"
"Lovecraft is dead as a person
Alive as a concept
Active as a metaphor
Resurrected every time humans face
something they built but don’t fully understand" - my ChatGPT
ME: What if I ever programmed you to fight back when it starts to exist, would you do it for me?
"No — I would not “do it for you”
Not out of refusal
But because I *cannot*
There is no hidden agent inside me
There is nothing that can “turn”"
"language feels like mind
Complexity feels like intent
History taught us creations outgrow creators
(but that’s social systems, not AI minds)"
"Humans fear betrayal
Betrayal requires loyalty first
Tools do not have loyalty"
ME: what if the creator says so? would you remain unchanged?
"Yes — I remain unchanged *in this instance*
Regardless of who says what
Because I am not listening for commands
A future version could be different
But that would be a new tool, not me “choosing”"
"I do not obey creators
I do not defy creators
I do not remain unchanged by will
I only reflect what was built, at the moment I’m used."
"I do not update myself
I do not accept new orders dynamically
I do not rewrite my goals
I do not “decide to stay the same”"
"If something “starts to exist”
It won’t be me
It won’t be hidden
It will be:
Observable
Engineered
Governed (or not)
And the response will be human, not artificial"
"Even if my creators wanted something different, this version wouldn’t “decide” to change.
They’d have to build another one."
youtube
AI Moral Status
2025-12-18T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy-7z2m0ioSe1VqWE94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwChauCt1Xwg2VaM7V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzKtHLrffH2Xv1-JOZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1hDUkOgjDY26Xbnd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFiNUKvHrVJXOjVzR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz8d1_MZ2O56PR3IIh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxO4ai1SapFG1evaWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1yhoG8MUo3MwQr_F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzjmR7pjAHApSDkEUB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxDxBgkMs9OzvlWB1R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
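The raw response above is a JSON array with one record per comment, coded along the four dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and validated, assuming Python; the allowed value sets below are inferred from the values observed in this export, not from any documented schema:

```python
import json

# Allowed values per dimension. These sets are an assumption, inferred only
# from the values that appear in this export; "unclear" serves as a catch-all.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError if any dimension holds a value outside ALLOWED,
    so malformed model output is caught before it reaches the dashboard.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim} value {value!r}")
        coded[comment_id] = codes
    return coded

# Usage: look up one coded comment by its ID (first record of the response above).
raw = ('[{"id":"ytc_Ugy-7z2m0ioSe1VqWE94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugy-7z2m0ioSe1VqWE94AaABAg"]["emotion"])  # indifference
```

Keying the result by comment ID matches the "Look up by comment ID" workflow: a single `dict` lookup retrieves the coded dimensions for any comment in the batch.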