Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Remember, the training set is based on just about everything - like romance novels. Take this into consideration before jumping to conclusions about an AI being conscious. One test is to repeat a question over and over again, but phrase the question differently each time. It's a great litmus test, and you'll find it answers differently (with contradictions) every time, without fail. Until an AI passes this test, you can safely assume it's not conscious.
YouTube AI Moral Status 2025-06-06T13:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwtVuMTcZCdIvc2zPN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwiycRp45y3R_wPoeN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxexT8bhJl2NYnLtEF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwFUSTlvNy43s1-p794AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzj9QS-cUv6oABoDwp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy11wOxKlFChzrQzwN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx0sDq68oBERnh3UOp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzyBoGYL3NhaNib6-54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxwdtaF6JqsEH577OJ4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz012ShcDJ4dFOpsRh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
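A minimal sketch of how a raw response like the one above could be parsed and checked before use, written in Python. The field names come from the JSON itself; the allowed-value sets below are inferred from the codes visible on this page, not from a documented codebook, so the real pipeline may accept more values.

```python
import json

# Allowed codes per dimension, inferred from this page (assumption, not
# a documented schema; extend these sets if the codebook defines more).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer"},
    "reasoning": {"mixed", "unclear", "consequentialist"},
    "policy": {"unclear", "ban", "none", "regulate", "liability"},
    "emotion": {"mixed", "fear", "indifference", "approval"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any out-of-vocabulary code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]')
rows = validate(raw)
```

Validating up front means a malformed or hallucinated code fails loudly at ingestion time rather than silently skewing downstream tallies.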