Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@squirlmy that's why I said "suspect". If you ask the program for its reasoning and it gives you an answer, you can say it just proves the algorithm is self-advancing just as well as that it thought of the answer. If there is no explanation, you could say it was an error or the machine was feeling like doing so. We still have no empirical evidence of such a thing as "free will", and we cannot prove our own. A person's behaviour is also an algorithm, or a sum of experiences and subconscious shortcuts (and these shortcuts are where current AI basically is). The only way we confirm consciousness is by saying we have it. So if an AI says it has it too one day, is that true as well? Right now, as we can only study symptoms, I'd say yes out of lack of definition, but who knows?
youtube AI Moral Status 2023-08-29T18:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgxyAHIGawFuQ2EkpNt4AaABAg.9tmi07x8WJT9u-xHqoSCrf","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwCs-7fFs6yN0fzPBh4AaABAg.9tl6dbE5G8Y9tl7iXfddoZ","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzDrNgzsymUJZWj6w54AaABAg.9tkT7usGXPF9touoF1be10","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzDrNgzsymUJZWj6w54AaABAg.9tkT7usGXPF9tpo57E1qrw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgyzV141oKXgnWuMpz14AaABAg.9tjz_abC7oJ9tk-cn2lO3Z","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwkO7w7QppYRt2TFIN4AaABAg.9tj04xvmqW69tj7lNm4DMO","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw7Z1xXv_oS4QHrp6t4AaABAg.9tijCEAe1SX9tmz2skx7q0","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgxqBwWrTOtsScVxtcB4AaABAg.9tiXA5iYd_M9toXbyt29zr","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxqBwWrTOtsScVxtcB4AaABAg.9tiXA5iYd_M9tp4DfQzcp-","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyYM3Lg8xtfFA4iWNx4AaABAg.9tiCaZOyhdN9tkys68pk24","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
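A raw response like the one above is only usable if every record is well-formed JSON with the expected coding dimensions. A minimal validation sketch follows; the allowed category values are an assumption inferred from the labels visible on this page, not a documented schema, and `validate_codings` is a hypothetical helper name.

```python
import json

# Assumed vocabulary per coding dimension, inferred from the labels
# that appear on this page (not an official schema).
ALLOWED = {
    "responsibility": {"ai_itself", "none"},
    "reasoning": {"mixed", "consequentialist", "unclear"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose every
    dimension holds an allowed value and that carry a comment id."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries rather than failing the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one well-formed record (hypothetical id):
raw = '[{"id":"ytr_example","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"}]'
print(len(validate_codings(raw)))  # 1
```

Filtering rather than raising keeps one hallucinated label from discarding an otherwise valid batch; rejected records can be re-queued for a second coding pass.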