Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As someone who knows about AI, or what is known as Machine learning. This is a both problem, but one of the major issues is that these chat bots are being pitched as something they aren't. In essence they are word predictors, and they are trained off of feedback from humans about their response. So they are effectively trained to predict what text the user wants to see, and its an oversimplification. Though to assume they "understand" something is a misunderstanding of how these algorithms work, and one perpetuated by the people selling them.

The danger is most people who aren't well versed in AI are told they do something they don't and rely on them for things they should not. Like medical advice. I like that many of the bots actually show their work now with what articles they searched to generate the response. So when they do that you can follow their link, and see the primary source of the information. It would take hours for someone not well versed in a field to know what to ask google to find information they need. AI can generate a response, and show you the source, and if you follow the source after asking it for any critical questions its a reasonable way to use them. Ask a health question you have, see the response. Then look for a source that is credible, and confirm the information from that source. Once you confirm the information consult your PCP, and discuss your concerns.

Either way it should only be a way of you discovering credible sources that can answer your questions. You should never rely on any LLM if the outcome would have a negative impact on you in any way. They don't know things they are good a summarizing information. THOUGH IF IT CAN'T FIND THE INFORMATION IT WILL MAKE IT UP. It has to generate a response, and its response is trained to be one humans agree with. It will likely makeup something your question implies is the response you want.
youtube AI Harm Incident 2025-12-12T17:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyxceGl4FxY7CYHQRh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx3fPIOEStMtV95j9x4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw7KeLpgU7zqKqgIbt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzRDMKLc96E6JoCd0B4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwsd0RPFGBi4hAdsXd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy-5qspWuYCuSgvgxx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxG6mF4ZTrsvs6t0ll4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxbXUNRMmVE2PzcCQ54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzUuybSKplyhwX_f1R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgysP3vyVDai5_7-9Ut4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
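A raw batch response like the one above can be parsed and sanity-checked before its labels are trusted. The sketch below is a minimal validator, assuming the allowed category values inferred from this sample output (the real codebook may define additional categories) and a hypothetical `validate_coding` helper name; the `ytc_` id prefix is taken from the sample ids.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may permit more categories than shown here.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every comment id in this export carries the ytc_ prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        # Keep the record only if every dimension holds an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxG6mF4ZTrsvs6t0ll4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
print(len(validate_coding(raw)))  # the single well-formed record survives
```

Dropping malformed records rather than raising keeps a long batch run alive when the model occasionally emits an off-schema label; a stricter pipeline could log the rejects instead.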