Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a place for all kinds of tech, and I have been strongly encouraging my team and junior techs to use AI. We also need deep integration between our ticketing system (historical issues), documentation system (what we need to know), chat program (we help each other through it), our meeting system (because we do trainings), and our 365 environment (once appropriate permissions are in place). With that level of integration, our problem-solving ability becomes the cumulative knowledge and capability of everyone who has ever encountered an issue or made a standard for our customers. ...But we already have a crap AI and it is unhelpful, spewing endless garbage. This is why sweeping generalizations about AI are problematic: it's a nuanced subject that includes "when it's good", "how to make it good", "when it's not useful", and "when it should not exist or be allowed to do a thing". I am sure there are more categories besides, but this is why we work in strategic IT, to look at companies as a whole and see how we can make them more knowledgeable, more aware, and ultimately get more done. AI's just a new tool in the box of human assets we have developed.
youtube 2025-06-26T02:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyxVzRafyvHRPAx9pl4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwvnoXSS6G8nVVcBYB4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwS02gdgtkIJKWJ8K94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgzSLg-QB4WqPBsxLop4AaABAg", "responsibility": "company",   "reasoning": "contractualist",   "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgybQfb9EFGLMCyz_z94AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgytxqlBGtxpFyiP7xt4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugzk_0VSKFtor4BDQzZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgzgIH1W4yPK6iXbVct4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgyQYluhWtxbxze1NHx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugy_WpF22Q7FJa0JMCl4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",          "emotion": "approval"}
]
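A minimal sketch of how a raw response like the one above could be checked before ingestion. The value sets below are only those observed in this batch (the full codebook may define more), and `validate_codings` is a hypothetical helper, not part of any existing pipeline:

```python
import json

# Dimension values observed in this batch; assumption: the real codebook
# may allow additional values not seen here.
OBSERVED_VALUES = {
    "responsibility": {"none", "company", "ai_itself", "user"},
    "reasoning": {"mixed", "virtue", "deontological", "contractualist", "consequentialist"},
    "policy": {"none", "liability", "ban", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coding records) and
    return a list of records whose dimension values fall outside the
    observed sets."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in OBSERVED_VALUES.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

if __name__ == "__main__":
    raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]'
    print(validate_codings(raw))  # → []
```

Catching out-of-vocabulary values this way surfaces model drift early, rather than letting an unexpected label silently enter the coded dataset.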