Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While this hilarious, the real issue here is that there was no Training Model used with their ChatGPT with expectation for it to perform beyond the limits of its Training Model. This yet another example that the common public not understanding how AI can only function within the limits of a Training Model and how the generic (aka free ChatGPT) model cannot be used as a knowledge database because the Training Model has never been trained to be accurate within ANY area of knowledge (let alone a specific one.) What I would love to see is THE LEGAL EAGLE or another legal team with the money to front a proper training model, could pull off in terms of case research by commissioning a Training Model to access ONLY official legal databases (aka no "AI guesses). A properly developed and funded Training Model in theory could drastically cut down research time by summarizing actual case recommendations that may favor the lawyers looked up. (We are at the point today that a properly configured Training Model CAN DO THAT.) So, I would love to know in real world how much time that could save a law firm. But at this moment, no one is willing to fund that Training Model's development which stupid... cause legal research and medical diagnosis research via AI Training models will be the future today if the funding went into it. The issue is society thinks that a generic AI with a generic training model of: search the internet... is somehow reliable?
Source: youtube | AI Responsibility | 2023-06-12T19:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugx2db4B-Ww9mffakdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwHdEs5bfVJ9xINCn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyR0p11jE0TaMmPrWt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzBMsjdvP1ukh0uH5t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgygNz6Vw8f27TyUQGt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwkKnefUdZUqAysWZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_Ugy9T-Au9EFcO8keupd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxQxnnpGQf1CyxKiNh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx7_P7W0eTtDM_81iJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzJ_TrudB4JSpCOLiZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"})