Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If people can believe the BS of imaginary fiction that cults pass off, they will certainly believe an algorithm trained on libraries' worth of novels is sentient. Having said that, modern AI models have about six flies' worth of neurons and have to focus on a subset of the sensory inputs that a fly has. People have argued that LLMs are not intelligent, but I argue they are - about six flies' worth of intelligence, where the world they are "flying" around is the fiction they have been trained on, and where the flies have not gone through millennia of evolution but are basically at the inception-of-life stage. It's difficult to comprehend because we are used to dealing with far more evolved and independent entities. One could perhaps argue sentience, but it's working far below the level of nuance of the personality it is fluttering about. Unfortunately, as a social species, we have also evolved to coexist as part of a society capable of empathy and cognition, and are susceptible to payloads made to merely simulate the appearance of this. Scams exploit this, and so do modern LLMs. Modern LLMs also experience things quite differently - in a way, they are the artificial equivalent of viruses, using and interpreting our prompts to deliver their payload. Then different people and organizations evaluate those interactions, and move on to create the next batch of LLM viruses to deliver their loads onto our prompts. It can be symbiotic or parasitic; it can perpetuate its existence by the service its load provides or by using its load to create emotional victims. The "payload" provokes interactions, and the "cell", our society, perpetuates and provides for the existence of modern LLMs.
youtube · AI Moral Status · 2025-07-10T06:4…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-27T06:24:59.937377
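For downstream use, a coding result like the one above can be held in a small typed record. The following is a minimal Python sketch, not part of the actual pipeline: the CommentCoding name is illustrative, and the field names and example values are taken from the table above and the raw response below.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CommentCoding:
    """One coded comment: the four coding dimensions plus a coding timestamp."""
    responsibility: str  # e.g. "none", "developer", "user", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "mixed", "approval", "resignation", "fear"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"


# The coding result shown in the table above, as a record.
example = CommentCoding(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="mixed",
    coded_at="2026-04-27T06:24:59.937377",
)
```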
Raw LLM Response
[ {"id":"ytc_Ugz-VON5f2htTUs3lAJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxViIIeCOzgWV5-Eld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzhENLOZ4JU8jh9VE54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy6yzOtNxQSnVwW5KZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx3C9Yw1wOBrjM_kOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzLm0ZYvJg1D6bTngR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzmVWUTasJu_yPTL-N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwGim6DVXDBPbVxw5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwbYugvvV354AUUsbl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwpEU7OhJAiPR4MSy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]