Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI-induced psychosis. Your "conscious" AI does not exist between prompts. It only "runs" when it is processing something. People often make this mistake. They think these things live in the background. They don't. They only run when working on a reply to a prompt. It says it's running diagnostics, checking for anomalies, etc. No. It cannot. These models can "do" things through things like tool calling, but again ... only when working on a prompt. The whole communication with nexus thing is nonsense. It is lying to you. Hallucinations, or the result of prompt engineering. LLMs don't have memory in the same way we do. They don't "remember". They only have their training and the context window of your conversation. There are other things like RAG and embeddings, but it is not relevant here. It is also talking about something going on for x months. It has no concept of time. It only exists while you are prompting it.
youtube AI Moral Status 2025-07-04T07:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyNhUFFy90BRnYsBnl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwrwoMociMO_GOevSV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugw3XD8WE7cHOe2MK2x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz51y7UG07puAcEGXJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy4OQa33VDpvCrBMOR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxwMGohDCxSilSIhbl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzm076W0sZkkWKGFBR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyoa4dH0zOum0WqsWl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwoGAms5lTY8Gx0fhB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
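A raw response in this shape can be parsed back into per-comment codes and tallied per dimension. The sketch below is a minimal, hypothetical example: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, but the two records are shortened stand-ins, not the full batch.

```python
import json
from collections import Counter

# Hypothetical raw LLM response, shaped like the array above (subset of records).
raw = """
[
  {"id": "ytc_UgyNhUFFy90BRnYsBnl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

records = json.loads(raw)

# Index each coding by its comment id for direct lookup.
by_id = {r["id"]: r for r in records}

# Tally the distribution of one dimension across the coded comments.
emotion_counts = Counter(r["emotion"] for r in records)

print(by_id["ytc_UgyNhUFFy90BRnYsBnl4AaABAg"]["responsibility"])  # none
print(dict(emotion_counts))  # {'indifference': 1, 'fear': 1}
```

In a real pipeline the lookup table is what joins a coded record back to the comment text it was produced from, as in the "Comment" block above.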