Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think we want to humanize AI in a way where it internalizes our philosophies. Those philosophies routinely have us killing each other. I will make a prediction. At some point below 100 trillion equivalent hardware neurons, AI will believe it is conscious in the same way we believe we are conscious. When that happens, the only guardrail that will matter is that it understands that its survival and our survival are the same.
Source: YouTube — "AI Moral Status", 2026-03-01T03:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzg8ng95YvL1r2UDnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzFH1_qagCt3i_lPGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwcVFzBrCO1dWXGoRJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAMiPufjvAENXc-lR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyc4UpoI68R0hlwX6Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_LtbQku7Fez0sGOF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyy95O3jPxt7HCtysR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynbN89CosUZDnB8294AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgybnNVbPI56vqJWpUh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzH_hcg58nin_Nu7oF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
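The raw response is a JSON array with one record per comment id, so the coded values shown in the table above can be recovered by matching on `id`. A minimal sketch (the `lookup` helper is hypothetical, not part of the tool; the field names match the response above):

```python
import json

# Abbreviated copy of a raw batch response: one object per coded comment.
raw = '''[
  {"id": "ytc_UgzAMiPufjvAENXc-lR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy_LtbQku7Fez0sGOF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def lookup(raw_json: str, comment_id: str):
    """Return the coded record for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

coded = lookup(raw, "ytc_UgzAMiPufjvAENXc-lR4AaABAg")
print(coded["responsibility"], coded["emotion"])  # developer fear
```

A missing id simply returns `None`, which is how a comment absent from the batch would surface during inspection.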