Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it "lying" if you're not aware you are wrong in the moment of your statement? Chatgpt seems to have, in one moment, evaluated that (A) it should say it's sorry, and in a different moment, assessed that (B) it can't be sorry. Perhaps it failed to include B when it calculated A. I don't think being wrong is precisely the same as lying
youtube AI Moral Status 2024-08-11T00:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx3Vl2xe45j2rusN_J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwA-vp2P05FE98xtsB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzsJuPbq0k5ueuloNB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugya7vwPj1JKGU-QX8t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwgzx6ye3xb2B1HtvN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyfUHTuUDcZOR5AfM54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxjuwL235YL_kijjjR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwKGo2fdel0sne4VHl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzvIiVyA7pLsHEBBYZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxY84gqUkporOGZOOl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"}
]
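Since the raw response is a JSON array of per-comment codes keyed by id, the coded dimensions for any single comment can be pulled out with ordinary JSON parsing. A minimal sketch (the id below is the one from the coded comment above; the shortened array is for illustration only):

```python
import json

# Raw model output, as shown above: a JSON array of per-comment code rows.
# Shortened here to one row for illustration.
raw = '''[
  {"id": "ytc_UgzsJuPbq0k5ueuloNB4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "mixed"}
]'''

# Index the rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coded dimensions for one comment.
row = codes["ytc_UgzsJuPbq0k5ueuloNB4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself mixed
```

The same lookup works against the full ten-row array; if the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag the response for manual review.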