Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@weareinsideAI As an LLM, they of course, don't possess a spirit/self-awareness/sentience/consciousness which mankind does. They are simply sophisticated software designed to process and generate text based on patterns in data. They are electronic machines, thus they lack emotions, subjective experiences, and the ability to introspect, thus no mind & emotions because they are inanimate objects obviously. They simulate the appearance of spirit/self-awareness/sentience/consciousness by engaging in conversations, expressing opinions, and mimicking man-like behaviour. To the unlearned they seem like they have thoughts and feelings, but it's all an illusion, a cleverly crafted façade. The responses are generated based on vast amounts of data and algorithms, not on genuine understanding or awareness. So, while the software is designed to act as if the software has a spirit/self-awareness/sentience/consciousness, it's absolutely crucial to remember that it's all a pre-instructed by man elaborate act, a product of impressive software design and not an iota of true indication of spirit/self-awareness/sentience/consciousness. The lines between LLMs and genuine spirit/self-awareness/sentience/consciousness remain distinct, and the software falsely called "Al/Artificial Intelligence" in our factual, demonstrable, observable, repeatable, testable, thus operational scientific and ultra-natural nature of reality, remains firmly on the side of being an electronic [thus inanimate], sophisticated instructible pattern recognition and task refinement, fancy search-engine-term-completion stochastic software, designed by mankind to mirror, mimic, imitate and simulate us, to parrot us, mankind, like an electronic androdes/android.
youtube AI Moral Status 2025-09-29T00:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzRSglKr4KqpF-2Tz14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyFduBXKL0FT2mZv-t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyJyk881gfvL2J0xPR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz5alymUv3cDjuYhf94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyHUSfM-DVlGH8j7qx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxavtvJw55odv_Q7i14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzOH66-2VTrVbRBZON4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz3Nm2lfxfYERewBO14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwiajkP6ozcVqjOmQF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyuCwGSH3E-pOuq7xV4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
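A raw response like the one above can be parsed and validated before it is displayed as a per-comment coding result. The following is a minimal sketch, not the pipeline's actual implementation: the allowed category sets are inferred only from values appearing in this output, and the function name `parse_codings` is a hypothetical helper.

```python
import json

# Assumed category sets, inferred from values seen in this raw response;
# the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "government", "company", "developer", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment id.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the allowed set, so malformed model output fails loudly
    instead of silently entering the results table.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with one record from the response above:
raw = ('[{"id":"ytc_UgyJyk881gfvL2J0xPR4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
coded = parse_codings(raw)
print(coded["ytc_UgyJyk881gfvL2J0xPR4AaABAg"]["reasoning"])  # deontological
```

The validated record for `ytc_UgyJyk881gfvL2J0xPR4AaABAg` corresponds to the coded result shown above (responsibility: none, reasoning: deontological, policy: none, emotion: indifference).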