Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI has been shown REPEATEDLY to be the master "sampler". If a program does NOT think, but samples responses/behaviors, and emulates those responses/behaviors to try to mimic what humans would do, is it "thinking" or just sampling and reproducing SO well that it passes every possible Turing test? Can we actually tell the difference between a program that "wants to" hide its true nature and one that samples tens of thousands of situations where programs are expected to hide their "true nature". Self-fulfilling?
Source: youtube · AI Moral Status · 2026-03-03T05:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyzvn5Dd5fvmqnvtYh4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyMLdUsDClyPXDagoB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxa9HLFmI3nkkK8Jw94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxZk8fKIaVWM4rIked4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxslicG88Jvvt2uGut4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxQnOwI2qmN45HBQCF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwLi3Y1_3dJ_HUKHad4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxNJcGNji15dzs9iOp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwtbFqHD2jwW3S1ENd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzB-xKkxvtpQQNggYV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
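Because the model returns one JSON array covering a whole batch of comments, the coding shown above for a single comment is recovered by matching on `id`. A minimal sketch of that lookup, assuming the raw response is valid JSON (the array here is abbreviated to two of the batch entries):

```python
import json

# Abbreviated sample of a raw batch response; the real payload
# contains one object per comment in the batch.
raw = """
[
  {"id": "ytc_Ugyzvn5Dd5fvmqnvtYh4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyMLdUsDClyPXDagoB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

# Parse the batch and index the codings by comment id.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Look up the coding for one comment, as the view above does.
coding = by_id["ytc_UgyMLdUsDClyPXDagoB4AaABAg"]
print(coding["emotion"])  # -> fear
```

Indexing by `id` rather than relying on array order is the safer choice here, since the model is not guaranteed to return batch entries in the order the comments were submitted.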