Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Watson, in 2011, gave a probability with every answer. LLMs in 2025 just give answers or hallucinations, come on developers, LLMs SHOULD give probabilities with every response. I hate the moving goal post of AGI. In the 1980s there were chess game machines that could beat most humans. That was defined as narrow AI. If you ask someone in 1985 what would make it AGI that would say it needs to do more than just play chess, it needs to tell me what the weather is like in Miami, or who the president is, then it would be generally intelligent. 40 years later the definition of AGI has moved to it must be as smart as ANY human in multiple or all domains and in some wackadoodles' ideas it must have consciousness. One thing I can tell you is something with that level of intelligence is definitely not "General" as general by definition means common across the population.
youtube AI Responsibility 2025-10-08T21:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw6nia84y7t65s1IK94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwetN_J-bOhbvQ4AZR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy2gU0N5Dw-auyxiqp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYXf3CX96H63RqqLd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuSj4A4OdQyojcwhF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxJQOxZEcS9YCSRwXF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyiP4eJ62vMkZ8jwOl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwAXm2Ng6LNYsjTgkF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxGrYIX5b02__DNTK14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx6WtzleY_3Ez_qvY54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
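A raw response like the one above is a JSON array with one coding object per comment, so the per-comment result can be recovered by parsing it and matching on the comment id. A minimal sketch in Python, assuming the model returned a valid JSON array (the `find_coding` helper name is hypothetical, and `raw` is abbreviated to two of the ten entries):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment
# (abbreviated here to two entries for illustration).
raw = '''[
  {"id": "ytc_Ugw6nia84y7t65s1IK94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyiP4eJ62vMkZ8jwOl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def find_coding(response_text, comment_id):
    """Parse the response and return the coding dict for one comment id,
    or None if the model did not code that comment."""
    codings = json.loads(response_text)  # raises ValueError on malformed JSON
    return next((c for c in codings if c["id"] == comment_id), None)

coding = find_coding(raw, "ytc_UgyiP4eJ62vMkZ8jwOl4AaABAg")
print(coding["policy"])  # → liability
```

Matching by id rather than by array position guards against the model reordering or dropping comments in its response.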