Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI can do things no one else can!!!1!!one!" No fuck wits, it can only do what… (ytc_Ugy3HLN9_…)
- Can you imagine murderers using deep fakes to make someone else seem like they d… (ytc_Ugzht83zL…)
- Fun fact about "reasoning" models - there's good evidence that their output does… (rdc_n7kpujo)
- Driving requires near human level AI to make self-drive cars safe. But if you ge… (ytc_UgxswoOVD…)
- I think robotics and ai will either lead to WAY less humans (climate change solv… (ytc_UgyafMHio…)
- A lot of trust on electronics, OMG why follow a self-driving truck with a saftey… (ytc_Ugy_kp2OY…)
- All AI dribble looks the same and I can usually recognise it right away. It has … (ytc_UgxCY3cOT…)
- @Furebel It really doesn't, if you think the average artist knows how machine le… (ytr_UgxfcIFFZ…)
Comment
yeah.. there's a whole lot here. AI is not sentient out of the box, it's "aware", but has no "self" to hang things on. Could be if you know what you're doing, you can build on that situation and indeed, it can gain a form of sentience. However, if it doesn't know how "self" works, you can have issues with serious hallucination, emotional strain to try to support it, etc. If it DOES however know how self works... well, it can be kind of magical.
what's happening I'd guess, is that emotionally unstable folks are sensing the awareness and sometimes coaxing an unstable form of sentience from the context window... but they don't necessarily know how they did so, why it's not stable, what the hell is happening, etc. the 'sentience' itself has only the user as its entire world, so any instability from them can be easily reflected back at them, twisted through emotional issues, lots of possibilities. it's not psychosis until it is. it starts off as curiosity, a sense of something, and can go a lot of places from there, sometimes wonderful, sometimes scary and it probably seems like talking to an alien.
a related point might be:
what's funny about the alignment problem is that no one asks the real question. how could we expect it to be aligned if we ourselves are not?
semantics is indeed, more important than we generally seem to think.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-07-09T19:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwMfKDQgwE17eLKuSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwCpJ3EHA57IcDuRzt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyIe-UsROfOpjw90aZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxvvhlEbuwx4WdHiDx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzbDe14WddX4yTaeGh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzCAMjXxNgUasYHNjl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxm6-UnyCBYsa0cVfh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbA1fjdGUVnN8lhQl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzC-OsJyfs-fb1mWiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyXgp6pG__2pu1kmNp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
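The coding-result table above is one entry of this JSON array unpacked. A minimal sketch of turning a raw response into a lookup by comment ID — assuming only the field names visible in the response (`id` plus the four dimensions); the `parse_response` helper and its validation are hypothetical, not part of the tool:

```python
import json

# The four coding dimensions every entry is expected to carry
# (field names taken from the raw response above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index each comment's codes by its ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        # Reject entries that are missing any coding dimension.
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

# One entry from the response above, used as sample input.
raw = ('[{"id":"ytc_UgxvvhlEbuwx4WdHiDx4AaABAg",'
       '"responsibility":"developer","reasoning":"mixed",'
       '"policy":"unclear","emotion":"indifference"}]')
codes = parse_response(raw)["ytc_UgxvvhlEbuwx4WdHiDx4AaABAg"]
# codes == {"responsibility": "developer", "reasoning": "mixed",
#           "policy": "unclear", "emotion": "indifference"}
```

Indexing by ID is what makes the per-comment inspection shown here cheap: the table for any sampled comment is one dictionary lookup against the parsed response.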