Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To be honest, I don't think sentience could be achieved without self-thoughts, so maybe you could say COTs (chains of thoughts) are the basis for smth like that, but I really think there's a need for a constant feedback-loop and non-linear (as in point A to point B) interactions between the different parts of the "brain". An AI that learns once and then crystallizes throwing back automated responses without an active understanding of what it did/thought *before* can't possibly develop sentience… I mean, I think one things that makes us good at it is that we're able to pause for a sec and concentrate on a thought process, whereas an AI typically has no insight. I do believe that's what COTs bring. Going further on that note : I've heard of "probe" AIs that, being fed which part of a LLM activate for which prompt and result, were able to determine "zones" associated with specific tasks… Could they be added as a part of an AI to directly give it a guess about what it's thinking ? Ik that is abstract but I feel like there's something to do here…
YouTube AI Moral Status 2025-07-09T16:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwiEWNctlQGcmI5v4F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZHZMoBHdqrAYuN5F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwOlxeDtogNV5rtI-l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwVjTkEXnHizg3vTZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx_TvYWK3AUfW5D5Kx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx3vewk2Koz79HGhUl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEsEcuG23f9oCgGAd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx9oN9oM8Kpeo-uyX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxefB2nRluc-Rr2QVF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxr3f9pW8RKGEhvqFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
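The raw LLM response is a JSON array of per-comment codings, each carrying an `id` plus the four dimensions shown in the coding result. A minimal sketch of extracting one comment's coding from such a response, using only the standard `json` module; the helper name `coding_for` is hypothetical, not part of the tool:

```python
import json

# Abridged raw LLM response: a JSON array of per-comment coding objects.
raw = '''[
  {"id": "ytc_UgwVjTkEXnHizg3vTZ14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx9oN9oM8Kpeo-uyX14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for the given comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

# Look up the coding for the comment shown above.
result = coding_for(raw, "ytc_UgwVjTkEXnHizg3vTZ14AaABAg")
print(result["reasoning"], result["emotion"])  # consequentialist mixed
```

Matching on the `id` field rather than array position keeps the lookup robust if the model returns codings in a different order than the comments were submitted.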