Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One more thing - I wouldn't worry too much about AI consciousness. While it may be possible in the future, I'm _very_ sure (though of course never 100%) that current AI doesn't have anything like it. I think this worry also comes from seeing a human-like response and assuming the process that produced it was human-like too. Keep in mind that the vast majority of conscious animals cannot speak - what they _do_ share, instead, is that they exist continuously. This is an often-missed point: AI doesn't actually sit running on some machine, waiting for prompts, reading them, and responding. When people say AI is a mathematical model, that's quite literal. It's a very complicated function run across many, many nodes, but it never has any existence outside of processing one specific bit of input; each node only runs to take a number in and pass a number out. That is, the AI can't ponder or be aware of itself because it basically exists for a near-instant, and the only time it engages other "senses" is when, say, it needs to check something online as part of the prompt: it sends a request, stops functioning, and then, when the request returns, it processes the result and completes your prompt.

It's like a manual stamping machine - the machine isn't "running" or "not running"; you turn the wheel to stamp something, and that's the only time it's in operation. It has no awareness of itself as a machine because it only runs when you request a stamp, and it only takes that input and performs its action. If AI were as complex as, say, a human body, it would be like wiring that body up electrically and then making it move a hand or inhale, with no other thoughts or processes beyond that one specific action. In that case, the body wouldn't be alive, let alone conscious. There is a possibility of consciousness in that instant, but it's extremely low _and_ couldn't act on itself - again, it can only take input, apply the same maths, and display output.
The reason consciousness matters to us is that we're aware of it and think about it - AI has no way to be aware of it and no processes to think about its awareness; the AI itself is just the maths being applied to the input. There are now things like reasoning models, but they're basically the same thing, just in a few runs rather than one. First the model breaks the prompt down into steps (which it's much better at than generating the final output directly), then each step becomes a separate prompt (phrased to be much clearer to the model, so it gets the desired output). So it exists for a few separate instants rather than one.
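The multi-run picture the comment describes can be sketched as plain, stateless function calls. This is an illustration, not any vendor's API: `call_model` is a hypothetical stand-in for a single model invocation, faked here with a canned response so the sketch runs.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for one stateless model invocation.

    No state survives between calls: each call is one pass of input
    through the same fixed function, then nothing. Canned output
    stands in for a real model so the example is self-contained.
    """
    canned = {
        "Break this task into steps: summarise an article":
            "1. Read the article\n2. List the key points\n3. Write the summary",
    }
    return canned.get(prompt, f"[model output for: {prompt!r}]")


def reasoning_pipeline(task: str) -> list[str]:
    # Run 1: ask the model to decompose the task into steps.
    plan = call_model(f"Break this task into steps: {task}")
    steps = [line for line in plan.splitlines() if line.strip()]
    # Runs 2..n: each step becomes its own separate, stateless prompt.
    return [call_model(step) for step in steps]


outputs = reasoning_pipeline("summarise an article")
print(len(outputs))  # one output per step; no state carried between runs
```

Each call exists in isolation, which is the comment's point: a "reasoning" model is the same instantaneous function, invoked a few times in sequence.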
youtube AI Moral Status 2025-10-31T00:2… ♥ 27
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxeSfHptS34dJlh-554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyu6z4Pp0svDkQdioV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKA8WqDRKdtTNh_up4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwP6zO5qhharFxQsOt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSz1XHI17u8MBJ2ih4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwWdXY7MpBr5d1U4ap4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwJf88m0MM_JRsHISN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyYK6U4AjSeIwrKh5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwK4ebkmf3weXzuyH54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgzspF-bigi0u0wyhG94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"}
]