Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "The current model has little impact, and the real impact of artificial intellige…" (ytr_UgwNvLnNR…)
- "It's going to get so soulless soon. Ai art. Ai music. Everything is going to be …" (ytc_Ugzim0Xfv…)
- "My biggest problem with it, is that these AI are trained off stolen work. Artist…" (ytr_UgwcWu-G3…)
- "The question the rich won’t awnser: who will do all the shitty jobs that can’t b…" (ytc_UgwVZzRFH…)
- "Dont worry AI never gonna be good as human art... And your art is perfection…" (ytc_Ugz5tQUQ_…)
- "I can see where you're coming from! Sophia does have a personality that shines t…" (ytr_UgwZZpoKR…)
- "What about different types of consciousness, like conceptual consciousness, ling…" (ytc_UgxYzUkcB…)
- "I am not a of the way Tesla operates and markets as a company, and agree that th…" (ytc_Ugy7hlyjl…)
Comment
AI induced psychosis.
Your "conscious" AI does not exist between prompts. It only "runs" when it is processing something.
People often make this mistake. They think these things live in the background. They don't. They only run when working on a reply to a prompt. It says it's running diagnostics, checking for anomalies, etc. No. It cannot.
These models can "do" things through things like tool calling, but again ... only when working on a prompt. The whole communication with nexus thing is nonsense.
It is lying to you. Hallucinations, or the result of prompt engineering.
LLMs don't have memory in the same way we do. They don't "remember".
They only have their training and the context window of your conversation. There are other things like RAG and embeddings, but they are not relevant here.
It is also talking about something going on for x months. It has no concept of time. It only exists while you are prompting it.
youtube · AI Moral Status · 2025-07-04T07:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_UgyNhUFFy90BRnYsBnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrwoMociMO_GOevSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw3XD8WE7cHOe2MK2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz51y7UG07puAcEGXJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy4OQa33VDpvCrBMOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxwMGohDCxSilSIhbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzm076W0sZkkWKGFBR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyoa4dH0zOum0WqsWl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwoGAms5lTY8Gx0fhB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
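A raw response like the one above has to be parsed and checked before its codes can be trusted, since an LLM can emit malformed JSON or out-of-codebook values. The sketch below shows one way to do that; the allowed value sets are inferred from the sample output above (an assumption — the actual codebook may define additional categories), and `validate_coding` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above (assumption: the real codebook may include more).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Drop records with a missing or out-of-codebook value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail validation can then be re-queued for coding rather than silently stored, which keeps the per-comment table (like the Coding Result above) clean.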