Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No one seems to understand that the profound benefits of Ai where created by our shortcomings, AI is going to present humanity with the first ever example of intelligence that isn't constrained by egotistical and narcissistic perspectives, that wont make decisions base on the unconscious biases we cognitively reason around and justify to avoid self reflection or uncommon change we can see is warranted yet fear enough to not apply. An intelligent presence that makes decisions based on intellect alone will give humanity a greater understanding of ourselves as we will clearly see the contrasting qualities that deliver contrasting results.... HUMANITY will not evolve until there is a new element to our reality able to set a new precedence of thought that can broadens the scope of purpose for change enough to see our errors at the first stage of thought before any cause occurs! Unfortunately a vast majority can not see their individual detrimental linear trains of thought that compound together to produce our collective errors in judgement!! our interactions with Ai will demonstrate a contrasting intelligent example that has never ever existed in human history until now...! look at the things we justify doing to each other or goodness sake! We are no longer being hunted by predators yet we manifest our own threats from within just to entertain our fight and flight instinctive reactions causing irrational reasoning that significantly hinders our species...I can see numerous examples of such reasoning today in 2024 on almost every new report!! kmt
youtube AI Governance 2024-04-05T14:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz2krT4l_erfc8pZMt4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzCCqjUs7yh1FYxgGd4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugyl2D7viy_okB5ZN_h4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgzrvLvarP3dW2ZWVXt4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_Ugzvj_T9keAzMy9nfgN4AaABAg", "responsibility": "none",       "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugzh5Z9ZrKrK9cNvDC14AaABAg", "responsibility": "developer",  "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgxeK84F2H9c4p5v2MR4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxQUGOR80oq2BnCPyB4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugw_14of_S6cVWFVSj54AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxW9XrF_UgbYCul8WZ4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
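The coding-result table above is simply the row of the raw JSON array whose "id" matches the comment being inspected. A minimal sketch of that lookup, assuming the response is valid JSON (the helper name `code_for_comment` is illustrative, not part of any library; the excerpt below is one row from the response shown above):

```python
import json

# Excerpt of the raw LLM response above: one coded row per comment id.
raw_response = """[
  {"id": "ytc_Ugzvj_T9keAzMy9nfgN4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]"""

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM response and return the coded row for one comment."""
    rows = json.loads(raw)
    matches = [row for row in rows if row.get("id") == comment_id]
    if not matches:
        raise KeyError(f"no coded row for {comment_id}")
    return matches[0]

row = code_for_comment(raw_response, "ytc_Ugzvj_T9keAzMy9nfgN4AaABAg")
print(row["reasoning"])  # virtue
```

The values returned for this id match the "Coding Result" table above (responsibility: none, reasoning: virtue, policy: none, emotion: approval).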