Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a person at work who over the last few weeks has been sending emails to team members giving opinions based on the output of ChatGPT and the questions asked. I'm all for generative AI as a resource to help form an opinion; the problem is you should have enough knowledge to be able to judge whether the opinion is actually valid. All that is happening is that the credibility of this person amongst their peers will no doubt suffer because the opinions being shared are incorrect, and if we blindly followed the opinion of a system that doesn't know the history and processes that are unique to the business, then we are in danger of losing competitive advantage and delivering substandard outputs. That is why guard rails need to be in place for how AI is used. It unsettles me that a simple question to a system that has no internal knowledge of a business can be considered gospel by people. As they used to say, those with little knowledge of a system believe they understand the system, whilst those who know a lot about the system know enough to know they don't know enough. This is why so many people regard themselves as experts these days. When I was at university the term expert wasn't used a lot, but in business it is rampant. This is the Dunning-Kruger effect, where people with little knowledge overestimate their abilities. Although it was Isaac Newton who mentioned the opposite: despite a life in science and mathematics, he still felt like a little child playing on a beach whilst vast oceans of truth lay undiscovered before him.
youtube 2025-05-16T22:3…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxlP65BRFBX7OkebNp4AaABAg", "responsibility": "none",      "reasoning": "unclear",        "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzQJ6YeywcoIqzKebx4AaABAg", "responsibility": "developer", "reasoning": "mixed",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyiEjGBN6em2kg0NON4AaABAg", "responsibility": "user",      "reasoning": "virtue",         "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgyHkk_Wcmv5Yqk-35h4AaABAg", "responsibility": "company",   "reasoning": "deontological",  "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyKF84NZCucAazabDF4AaABAg", "responsibility": "company",   "reasoning": "deontological",  "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugz_RxcnJ_UhADyg6_p4AaABAg", "responsibility": "developer", "reasoning": "unclear",        "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugyf_falUwd3Y9zEme94AaABAg", "responsibility": "user",      "reasoning": "virtue",         "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugya3m2C50clexfAIVV4AaABAg", "responsibility": "company",   "reasoning": "contractualist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgxL8iktbF3LsHlGsGx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",        "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgxannRGyfraquPKNwR4AaABAg", "responsibility": "company",   "reasoning": "deontological",  "policy": "ban",       "emotion": "outrage"}
]
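The model returns one JSON array covering a batch of comments, from which the per-comment coding table above is derived. A minimal sketch of how such a response might be parsed and validated: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the JSON above, but the parsing code itself is illustrative, not the tool's actual implementation, and the `raw` sample is truncated to two records.

```python
import json

# Truncated sample of the raw model output shown above (assumed input shape).
raw = '''[
  {"id": "ytc_Ugyf_falUwd3Y9zEme94AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxlP65BRFBX7OkebNp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]'''

# Every record must carry all five coding dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model's JSON array and index records by comment id,
    raising if any record is missing a required field."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
# Look up the coding for the comment inspected above.
print(codings["ytc_Ugyf_falUwd3Y9zEme94AaABAg"]["emotion"])  # mixed
```

Indexing by `id` makes it straightforward to join each coding back to its source comment, and the field check catches truncated or malformed model output before it reaches the coding table.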