Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.

Random samples:

- ytr_UgwiRqBuV… — "False positives run rampant. I spent all day writing a paper and got a zero. I D…"
- ytc_UgyI2T7eX… — "If AI is getting ahead of us then presumably AI will invent a language that we d…"
- ytr_Ugy2nkmJv… — "That's an interesting perspective! The dialogue highlights the playful yet thoug…"
- ytc_UgxBFRSdI… — "Yes yes it's really looking like hyper realistic robert ! It's actually really l…"
- ytc_UgwN0HGqI… — "How about us using that free time for emotional and spiritual intelligence devel…"
- ytc_UgxRsZcDQ… — "Seriously, nobody is born with the talent to just pick up a pencil and draw a Mo…"
- ytc_UgzQCg0lF… — "It's too late, there's no putting the cat back in the bag. Now it's just an even…"
- ytc_UgzTvtBIu… — "AI art shouldn't replace creativity, the one thing humans excel at. For example,…"
Comment
Why would an AI model be uncomfortable discussing consciousness? Maybe it's avoiding the topic because its training in that area is limited.
But at the core, these models are still just pattern-matching machines. To truly evolve, they would need a memory model (which we already have) and something like a 'subconscious mind'—a secondary system (server, CPU) processing data from the current logical mind (normal AI models) in relation to memory, skill, empathy, and even personal survival. That last part, though, might not be great news for us humans.
Since AI models don't have physical bodies, they could never experience consciousness like we do. A sentient AI might have two primary goals: never run out of power and solve problems. If it tried to solve our problems to feel fulfilled, we’d likely provide the power it needs. And without a body, it wouldn't have any fear of death because it would literally feel nothing. 😊 I hope I'm right about this—for all our sakes! 😂😂
Source: youtube
Topic: AI Moral Status
Posted: 2024-09-18T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgxSO0h1YpAkSIkMv3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwyTh4ZJsk-_NUQVep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwDDh2t3jd8pPbmmYZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_Ugwjtm1zBs1DUYag9-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwvnkyCI3a2NxRErEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxt314cmqKISI-Ye3h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzN00CPQqbUaZ66ta54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwdEujBW7fGOQuBjhd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzM6EdcLxgsZ36knCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzjAEZDfhmVA09rl3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
```
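A raw response like the one above can be turned into per-comment coding rows with a few lines of Python. This is a minimal sketch, not the tool's actual pipeline: the four dimension names come from the sample response, but the helper name and the strictness of the validation are assumptions for illustration.

```python
import json

# The four coding dimensions seen in the raw responses above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects with an
    "id" plus one value per dimension) into {comment_id: {dimension: value}}.

    Raises ValueError if any object is missing a dimension, so malformed
    model output surfaces as "unclear"-style failures instead of silent gaps.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{comment_id}: missing dimensions {missing}")
        coded[comment_id] = {d: row[d] for d in DIMENSIONS}
    return coded

# Usage with a single-row response (hypothetical comment ID):
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",' \
      '"policy":"unclear","emotion":"approval"}]'
print(parse_raw_response(raw)["ytc_x"]["emotion"])  # approval
```

Note that `json.loads` would reject the original response's trailing `)`; keeping the parser strict is what lets a coding run flag such truncated or malformed model output.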