Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I typically don't go into philosophy too much. But assuming that ChatGPT is incapable of having emotions and designed to pretend that it does, is it lying? Does lying require some semblance of consciousness, emotions and/or free will? If you buy a ruler, that is bad, and you take measurements that gives you bad result because of it... is ruler lying to you?
Source: YouTube · AI Moral Status · 2025-03-04T06:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyOgGPaGNmRUat10FF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxZOCYY_LVHrppPLMx4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxMuckUUT-Rid2-QDJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx3CTLfGWLJzdutghx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz7kgvpHpeFG2rpMmJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzVEcnlKHDo1oMFBjB4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxJXDSHdrCuP4b2hiN4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy0KKRVMTBEa6g5L5J4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxis52qXSkwBAZBMjR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyc5z1lenglxxsFbEh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
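The coding-result table is derived from one entry in this array: the entry with id ytc_UgxJXDSHdrCuP4b2hiN4AaABAg carries exactly the values shown (reasoning deontological, emotion mixed, the rest unclear). A minimal sketch of how that lookup might work; the indexing-by-id step is an assumption about the pipeline, not something shown in the source, and the raw string here is trimmed to that single entry:

```python
import json

# Raw LLM response: a JSON array of per-comment codes, using the field
# names seen in the response above. Trimmed to the entry whose values
# match the coding-result table.
raw = """[
  {"id": "ytc_UgxJXDSHdrCuP4b2hiN4AaABAg",
   "responsibility": "unclear", "reasoning": "deontological",
   "policy": "unclear", "emotion": "mixed"}
]"""

# Index entries by comment id so each coded comment can be looked up
# directly when rendering its result table.
codes = {entry["id"]: entry for entry in json.loads(raw)}

code = codes["ytc_UgxJXDSHdrCuP4b2hiN4AaABAg"]
print(code["reasoning"])  # prints "deontological"
print(code["emotion"])    # prints "mixed"
```

Parsing the whole response once and keying on id keeps the display logic a dictionary lookup per comment, rather than a scan of the array each time.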