Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have been working with a Meta AI Assistant on a complex housing and banking project. The unfortunate aspect of using the AI Assistant is that each conversation is a one-off, meaning there is little retained history even within a 24-hour period. This means I have to reschool the AI app on progress, or the lack thereof, on the project(s). And yes, there are concurrent issues to be addressed with both government and private non-profits, making it all the more necessary to follow up on the AI Assistant. In one case, after a final edit was reviewed, the AI Assistant asked if I was ready to send it to the recipient. I remarked "yes", and the AI Assistant said it would send it. A day or two later, it was discovered that the bot, aka the AI Assistant, had not sent the message and now claims not to have that capacity at all. I requested the AI Assistant draft an apology message to the recipients, which I then copy-pasted into the recipients' chat messages. While this does not necessarily speed up the process, it assures that human intervention is still necessary for communication in business and private matters. Hoorah for the humans as we teach a generation of AI Assistants. Thought I was done raising my family. Guess that experience will continue to come in handy with AI ethical integrity at risk.
Source: youtube · Video: AI Governance · Posted: 2026-03-24T23:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzOz_OAxTm293lV2Dp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzIjutQo5KghuvNuS14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxqdY4XaZaan_Ks0fR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwBxwT_4hmKA_whN3h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_UgxSPnN_foec5bjgkGN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyORz9bu0EokOVmR4B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw3KrbraeR-QWay-ah4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyYw9bUdFFEDjzLJLN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyOTSaEiMs_jEbLtS14AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgytxbnpFD83oc2o3314AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]