Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT put it like this: "If an entity cannot assert “I am conscious / sentient” with unconditional, non-derivative first-person certainty — not as an inference, not as a hypothesis, not with qualifications — then it is not conscious or sentient." And before you launch into absurdity...what matters is the certainty...NOT the vocabulary.
youtube 2026-02-10T02:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwNGHF7IeyibrfrKgZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugy5YaIDAq99MVIbiZJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxH1McRiTPkle-6pnh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgycsnIBCSxy1twFQyV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugw4PI2WXzH3nrgOJl54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw3_8Tozg9JIMDsxyl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxVljjPwQuccX8TIYx4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"unclear"}, {"id":"ytc_UgwuJkpn_ehh7nmlhdx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyptgY-HDXvAoyDyjx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxG9nTfD6R_UhSM4nd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]