Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great chat. Building trust comes down to how willing we are (as individuals and communities) to learn why people do what they do. Why do people compete for better, compete for advantage, compete for popularity, compete for stability, security or favour? Why is our population split into honest contributors and dishonest opportunists? I think the answer lies in the fact that there's no correct way to live a life. There's no rule book handed to us at birth telling us how we should be. As individuals we choose our way to interact and navigate this world, and many choose conformity because it seems to be a safe bet. I don't think you can tell people they must become more empathetic, or to give up dishonesty. Many probably rely on being dishonest to survive and put food on the table. As communities we reward the trending values of our time, and that seems great but it's actually a problem. If we're to make it I think we need to stop favouring others based on what we see as superior performance, otherwise our evolutionary instincts will just circle back around. It took all types for us to thrive as a species, not just the 'good ones'. We need to learn what a human really is and then place trust in that complete package. Having said all that, I think we will merge with AI.
youtube AI Governance 2025-08-04T10:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxoGmAVNR2E2tuUvkh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy6rM1S-ZiEPlufBgV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxPfSmpA_UMeu0RKw94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy8AqwBfrWWTfF1gqh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxjxYzQZM8crf1pdIl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzPr691VAoNMXSJibN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw0wtdVxFMp3CFl9kR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxIXkaVBRlSGYslc394AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwBb9_qtcL8QDPI4Yp4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxmQD0lUJDzUSd-ZXd4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
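Inspecting the raw response for one coded comment can be sketched as below. This is a minimal illustration, assuming the raw LLM response is a JSON array of per-comment records with the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `lookup_coding` is hypothetical, not part of any tool shown here.

```python
import json

# Two abbreviated records from the raw response above, used as sample input.
raw = '''[
  {"id": "ytc_UgxoGmAVNR2E2tuUvkh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy6rM1S-ZiEPlufBgV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

def lookup_coding(raw_response, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

rec = lookup_coding(raw, "ytc_Ugy6rM1S-ZiEPlufBgV4AaABAg")
print(rec["emotion"])  # -> indifference
```

Matching on the `ytc_…` comment id is what ties a table row like the one above back to its line in the raw model output.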