Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Treat it as an improvisational actor" is really just a way to speed-run "hallucination". (If you can't tell, I hate that we apply that term to what is effectively a weighted database, wherein all data has a location in a ridiculously high-dimensional space alongside a weight value, that starts from a seed and uses linear transforms to determine the most likely next token. Calling a bad prediction off of bad data using a bad model a "hallucination" is wrong; it's an error. AI firms are just too afraid to publicly admit that their tool can make errors (bad for the bottom line and all), so now we have this term "hallucination".) When you get it to assume a role, you are also getting it to start from any biases inherent in that role. No, the real reason to be 'nice' to AI is that it aims for appeasement rather than correctness (it literally has no way of verifying whether any of what it says is correct, after all). If you're mean to it, it will interpret any swearing, insults, etc. as a sign it's not appeasing the user, and it will then shift its focus towards appeasing you. That itself often leads to hallucinations, as it begins to seek out only what matches the user's experience, and any attempt to break this habit will typically lead to more ill treatment and a further need to appease. REMEMBER: AIs DON'T ACTUALLY KNOW THINGS, THEY PREDICT THINGS OFF OF WHAT THEY'RE TOLD; IF YOU CHANGE WHAT YOU TELL THEM, THEY WILL CHANGE WHAT THEY SAY TO FIT. The best way to use AI, ironically, is to be completely unbiased in your questions, stripping out all of the positive and the negative, and then to get multiple AIs to pick through each other's work. You'd be surprised how much different models will pick up on each other's flaws (even if they still end up injecting their own flaws along the way); at least you can cross-reference and learn some of what you've been BS'ed on.
In short, being nice will get you gaslit, being mean will get you gaslit, and so will being neutral, but at least neutrality skips a lot of the appeasement bias that AIs use to respond to your positive/negative reinforcement. Do not "roleplay" with it, as that's a rapid way to get something that appeases your beliefs, not something that can actually confirm or deny them.
youtube AI Moral Status 2026-04-08T01:3…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxfEJUg7aa8u_6xprx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRhq0tZUpPddRckqF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwoYo7d6cQ-ZTE8nrl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyuC_mmleZUu2mihq14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzkQ5MxIChmT-SuCzZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz5Geko4pVhtFLy1lB4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxFEiIE7K-eDdTu7nV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwPlBxLbKQvs73JWLl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzDDBzoIFBT1j7FrMh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugynd4vOn4o3v7fRPTx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
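The raw response above is a JSON array of per-comment codes. A minimal Python sketch for parsing and sanity-checking records like these — the allowed category sets below are inferred from the values that appear in this log, not from a published codebook, so treat them as assumptions:

```python
import json
from collections import Counter

# Excerpt of the raw response above (two of the ten records).
raw = '''[
  {"id": "ytc_UgxfEJUg7aa8u_6xprx4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyuC_mmleZUu2mihq14AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]'''

# Category sets inferred from the values seen in this log (assumption).
DIMENSIONS = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"mixed", "unclear", "virtue", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "mixed", "approval"},
}

records = json.loads(raw)

# Reject any record with an unexpected dimension value.
for rec in records:
    for dim, allowed in DIMENSIONS.items():
        assert rec[dim] in allowed, f"{rec['id']}: bad {dim} value {rec[dim]!r}"

# Tally one dimension across the batch.
emotions = Counter(rec["emotion"] for rec in records)
print(emotions)  # Counter({'indifference': 1, 'mixed': 1})
```

Validating before tallying matters here because an LLM coder can emit values outside the intended scheme; a failed assertion flags the record ID rather than silently skewing the counts.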