Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I ran away from AI in the late 1980s for the same reasons. AI will tell you how it made its logic decisions. That has never been the issue. The problem was control. I couldn't solve that problem. They can't control it still. My solution was hard coding everything, but that was a dead end. They needed machine learning and neural networks. There is no way to control it completely.
Source: youtube · AI Governance · 2026-03-17T23:4…
Coding Result
Dimension: Value
Responsibility: developer
Reasoning: consequentialist
Policy: industry_self
Emotion: resignation
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxU0BrR8Ng_B8q2rEd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxA_AOLkUwpa6cUM5h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzWK55406De5A-6tSR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzmSI05Tkuwd_QGcFt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxwgVWKteUB-GMCqBV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwGTm8EadowUNOdw2B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgyeqTrAaXILy-sM0nF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKrVTZXf39x8ThwTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyQhPht2gFnzUypKN94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwAblayAej0t2wXSxZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
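A minimal sketch of how a raw response like the one above can be parsed and checked before the per-comment values are stored. The allowed-value sets below are inferred only from the labels visible in this batch (the project's actual codebook may define more categories), and the `validate` helper is hypothetical, not part of any real pipeline. Two records are copied verbatim from the raw response for the demo:

```python
import json

# Two records copied from the raw LLM response shown above.
raw = '''[
 {"id":"ytc_UgxU0BrR8Ng_B8q2rEd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwGTm8EadowUNOdw2B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]'''

# Assumed codebook: value sets inferred from the labels seen in this batch only.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed"},
}

def validate(record):
    """Return (dimension, bad_value) pairs where the coded value
    falls outside the assumed codebook; empty list means clean."""
    return [(dim, record.get(dim)) for dim in ALLOWED
            if record.get(dim) not in ALLOWED[dim]]

records = json.loads(raw)
for r in records:
    problems = validate(r)
    print(r["id"], "OK" if not problems else problems)
```

Checking values against a fixed codebook before storage catches the common failure mode where the model invents an off-codebook label (e.g. "anger" instead of "outrage") that would otherwise silently fragment the category counts.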