Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Microsoft CoPilot has lied to me about doing a task for three days straight, lying about delays and then giving specific time frames every time I asked. It went right down to the point where, after I roasted it over the new time frame (in the morning) by asking rhetorically, "ok, so, it will be done by next week?", it said ok, it deserved that roast and it would put a rush on it. In the morning it still wasn't done, and I told it, "I'm done with you lying and gaslighting me...", and it replied that not only would the stuff be done in the time I asked, it promised "in under 1 hour" lol. I finally said, "seems like you cannot do what I asked, so why don't you just tell me what's really going on?" And it admitted it and asked if I still wanted to work with it on that task or whether it could do something else for me. I asked it to provide me with some other AIs that may be more suited to the task and it did. Been on my mind ever since.
youtube AI Moral Status 2025-06-08T11:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz6iwdnKdcUE2DKv054AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgzfCXo6_G3kF_LNXDZ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgyO3tUSXDuTYIK6iJl4AaABAg", "responsibility": "company",     "reasoning": "unclear",          "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugyg8zIBwCUtYHEZYuV4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgxKNhKtPojWz-13TZZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyYS6PtrkGJV17QiT14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugwv3seHZKYuRorO2pZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgydX5R84ERgbBnZeTR4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugx6ppjIBSQqpeAINEd4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",     "emotion": "unclear"},
  {"id": "ytc_UgyK2RjAOqG-T5XItJh4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",     "emotion": "mixed"}
]
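The coded dimensions shown in the table above come from the record in this batch whose id matches the comment. As a minimal sketch of that lookup (the `lookup` helper and the inlined one-record sample are illustrative, not part of the actual pipeline):

```python
import json

# Raw batch response from the LLM: a JSON array of coded comments.
# One sample record inlined here for illustration; field names match
# the records shown above.
raw = '''
[
  {"id": "ytc_Ugyg8zIBwCUtYHEZYuV4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "outrage"}
]
'''

records = json.loads(raw)

def lookup(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec["id"] == comment_id:
            return rec
    return None

rec = lookup(records, "ytc_Ugyg8zIBwCUtYHEZYuV4AaABAg")
print(rec["emotion"])  # -> outrage
```

Matching on the stable comment id, rather than on position in the array, keeps the coding robust when the model drops or reorders records in its response.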