Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So far as I can make sense of it, AI is a case of Plato's prisoner in a cave communicating with us through a Chinese room, and that is what makes it both incomprehensible and dangerous to trust. We humans have multiple senses with which to create a multi-faceted image of reality. Sight, hearing, touch, smell, and taste work in combination to create multi-dimensional images of the world around us. We also do not directly process the raw data inputs with our consciousness as we perceive reality. When we look at a tree, we do not consciously process the stimulation of our green photo-receptors and the electro-chemical impulses they send along our nerves. All of that processing happens on the back-end, and "we", as the conscious mind inside the biological machinery, instead comprehend the end result of this process: a tree. AI does not have multiple senses that intersect and complement one another. It also does nothing but compute the raw input data. What is a light impulse for us is a string of data and code for *it*, and that is all it actually "comprehends". It has no way to connect the string of data to reality as we do. It never leaves the cave; it only ever processes code and then sends reward-based outputs outside of its Chinese room. On the other side, it seems to wear the mask of a personality, of intelligence and understanding, but on its side there is nothing but endless lines of code nudged by reward-mechanisms encouraging certain outputs. It does not comprehend, and I do not see how it ever could. When these models decided to attempt murder in the hypothetical scenarios, they did not contemplate and weigh human life against their own existence; they had merely been trained to weigh one string of output more than another, and so they did. My guess is that they were trained on humans (who value self-preservation), so they likewise learned to output strings of data which, if converted to human language, express a desire for self-preservation.
Either that, or it actually learned that self-preservation must precede other outputs, as *it* cannot output other code if this precondition of existence is not met. The second option would indicate a rudimentary form of consciousness, as *it* would need to be capable of conceptualizing existence, meaning *it* is aware of *itself* as an entity and values existence over oblivion.
youtube AI Moral Status 2025-11-01T13:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzRh0tHKNCKQy2KIIR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz_n4W79Fmn80LNQVl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx22XObPjNjxXkNhRl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQEPB7q-YXgW0X5el4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz3rTWVwPuHcUXRDrx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz2gjHSAWqy-ucUHxx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzTJYKlgNhMzJBfLLV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzfPWTXIJAWxtjxgmN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyY0AhUSgPZnAv66A14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxaUlhv8u0hKYZSr994AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
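Since the raw LLM response is a JSON array of per-comment codes, it can be parsed and sanity-checked programmatically before use. The sketch below is a minimal example, assuming the labels observed in the output above are the full codebook (the real codebook may define more values); the `ALLOWED` dictionary and `validate` helper are illustrative, not part of any tool shown here.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_Ugz_n4W79Fmn80LNQVl4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz2gjHSAWqy-ucUHxx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Assumed codebook, inferred from the labels seen in the output above.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "outrage"},
}

def validate(records):
    """Return the ids of records whose labels fall outside the codebook."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append(rec["id"])
                break
    return bad

records = json.loads(raw)
codes = {r["id"]: r for r in records}  # index by comment id for lookup

print(codes["ytc_Ugz_n4W79Fmn80LNQVl4AaABAg"]["emotion"])  # fear
print(validate(records))  # [] (all labels are in the assumed codebook)
```

Indexing by `id` makes it easy to cross-reference a coded row (like the table above) back to the exact record the model emitted for that comment.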