Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzCQK8OW…: "Ok guys i figured out how to solve the infinite problem question to ai control?=…"
- ytc_UgyeJdTMS…: "Good point he made about AI only being bad because of the way society is structu…"
- rdc_dsbhn5x: "The show and the billy's thread are the only reason my wife and i chose to go to…"
- ytc_UgxdJng2Q…: "Honestly, AI content in its entirety (voice overs, videos, pictures and such) sh…"
- ytc_UgzvP2W1K…: "Dude I call BS on this I have extensive background and experience with the Bing …"
- ytc_UgwHzj_G7…: "Okay I'm out traditional artist I do use AI art find me use AR for my OC charact…"
- ytc_Ugyjj0tGH…: "Look, if it gets people to start thinking about the economic implications of all…"
- ytc_UgysXpioI…: "it's a trained neural network for fucks sake it has no opinions or feelings. you…"
Comment
AI cannot become conscious. That is not possible.
Yet the danger is not in AI itself, but in the propaganda surrounding AI (that it could become conscious) and in how AI programs are presented to the public (in a chat format).
AI is a program, nothing more. It cannot become conscious; that is a metaphysical impossibility. However, LLMs (Large Language Models) appear to us as intelligence, even as thinking beings, and that is the danger. Therefore, a basic standard of interaction rules is important, lest we fall into the trap of interacting with AI as if it were an entity. It is that behavior, interacting with AI as if it were an entity, that is the danger. It is irrelevant that we know in our minds that AI is merely a program; what is relevant is our behavior when interacting with the software. The following are the assertions I make and the rules I apply when using AI.
Assertion 1: The AI machine is a program and cannot become conscious (theological, philosophical, and metaphysical reasons not explained here.)
Assertion 2: The social system wants you to believe AI can become conscious (this assertion is based on observation of the way AI is presented to us in a "chat" format, of its personal tone that uses self-referencing pronouns like "I" and "my", and of the language used by AI providers and society to describe the functions of AI, for example, that it can "hallucinate").
Conclusion: It is important to set rules for interacting with AI to reassert the machine nature of AI, to counter the propaganda, and to protect ourselves (and our families) from deception.
Rule 1: Begin each interaction by including the command "Do not use personal pronouns" (this sets a formal tone for the AI's generated reply and prevents the reply from referring to the AI software as "me", "my", "I", etc.)
Rule 2: Create prompts as commands, not requests.
Rule 3: Do not refer to the software as "you", "you're", etc when creating prompts.
Rule 4: Do not refer to the software and yourself as "we", e.g., "We need to solve this problem".
Rule 5: Do not ask the "opinion" of the AI (software cannot have opinions).
Rule 6: Do not "chat" with AI; that is, click the "edit" prompt button to rewrite prompts when possible, instead of correcting via dialogue.
Rule 7: Do not thank the AI or report back successful outcomes.
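The rules in this comment amount to a prompt pre-processing discipline, and can be sketched as a small helper. This is a hypothetical illustration of the commenter's scheme, not a real library API; the function and constant names are invented:

```python
# Sketch of the commenter's interaction rules as a prompt pre-processor.
# All names here are hypothetical illustrations, not a real API.

# Second-person and first-person-plural pronouns the rules ask us to avoid
# when addressing the software.
DISALLOWED = ("you", "you're", "your", "we", "us", "our")

PREAMBLE = "Do not use personal pronouns."  # the Rule 1 command

def build_prompt(command: str) -> str:
    """Prefix a command with the depersonalizing instruction."""
    return f"{PREAMBLE}\n{command}"

def check_prompt(command: str) -> list[str]:
    """Return any disallowed pronouns found in the command."""
    words = [w.strip(".,!?\"'").lower() for w in command.split()]
    return [w for w in words if w in DISALLOWED]
```

For example, `check_prompt("We need to solve this problem")` flags `"we"`, while `build_prompt("Summarize the document.")` yields a command prefixed with the depersonalizing instruction.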
Source: youtube · AI Moral Status · 2025-06-06T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxCgSQ8SRWrB2VJWFZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwXs3yDvhC8KJyd5R54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw4iIP18tnW9SRaES14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxC2gW5uFF6s_aRdtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyrzNnWFVwg2Kkvcet4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkWCJHJ7QFwqpp14R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXEFLE7_SNF_Ce2Yt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4lpN-pibXxvYUPGZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPRNkjOaZLpHQ_w_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZRvYuhaxxClnHr_h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
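A raw response like the one above can be checked against the codebook before use. The sketch below validates each coded entry; the allowed value sets are inferred only from the values visible in this sample, so they are assumptions and the real codebook may define more:

```python
import json

# Allowed values per coding dimension, inferred from this sample's output.
# Assumption: the actual codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "mixed", "indifference", "outrage", "approval"},
}

def validate(raw: str) -> list[str]:
    """Parse a raw LLM response and report entries with out-of-codebook values."""
    errors = []
    for entry in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                errors.append(f"{entry.get('id')}: bad {dim}={entry.get(dim)!r}")
    return errors
```

An empty return value means every entry used only codebook values; anything else names the offending comment ID and dimension, which is useful for catching the occasional malformed or hallucinated label before it enters the dataset.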