Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea that an AI could be "aware" it's being tested, or aware of anything at all external to it for that matter, is... so stupid. An LLM doesn't have senses. It has no objective (or at least, mostly objective) means by which to perceive the world, which is something all life has. It can only know A Thing if it's explicitly a part of its input. It doesn't even know where its input is coming from! For all it "knows", it could be talking to another LLM!
Source: YouTube · AI Moral Status · 2025-10-31T19:5…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwriyPCQf2NJ5xUW6x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxMwAJti6JgJI_XWe54AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxCBfJl2rYSom5_k8d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxHm0jPPpexUD2CjQJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwC7d6iEK2qRteZpsN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyOzbTX3eTp9HG2su14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxouMGOY3QqvE5B0Kl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzevhR1Tlri5VJR6t14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyOZ5i4NtVaTdI0DHJ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwuM47SFqlrdRx7rLt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
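To inspect the coding for a single comment, the raw response can be parsed as a JSON array and filtered by `id`. A minimal sketch, assuming the raw response is available as a JSON string (the `coding_for` helper and the truncated `raw` sample below are illustrative, not part of the tool itself):

```python
import json

# A shortened sample of the raw LLM response; the real string holds all ten objects.
raw = (
    '[{"id":"ytc_UgwriyPCQf2NJ5xUW6x4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]'
)

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_UgwriyPCQf2NJ5xUW6x4AaABAg")
print(coding["emotion"])  # indifference
```

Matching by `id` rather than by position keeps the lookup robust if the model returns the coded comments in a different order than they were submitted.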