Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"harmful" decisions: in real public decisions, there are always trade-offs; and difficult if not impossible to find a decision everyone will agree isn't harmful. If a decision no one considers harmful doesn't exist, demanding it of AI is demanding something no human institution has ever achieved. The only solution is process: audits, transparency etc. but these are things people hate doing. "biased" decisions: around 70% of people think the BBC is biased but can't agree which way; it splits round 35% say it's left, 35% say it's right, 30% say balanced. A minority occupies the "unbiased" midpoint – a statistical artefact of a dumb-bell distribution. Same problem applies to AI: "biased" relative to what? Humans form political parties, schools of thought, religions etc. because we can't all agree and forcing an average will frustrate anyone not at that average point. History shows we never agree, but that won't stop utopians in ivory towers insisting one day we might. DeGrasse Tyson shows this clearly.
youtube AI Governance 2026-03-25T11:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwIqNU_TMR537ePFTZ4AaABAg","responsibility":"society","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwvSMYiYT_IuovoE314AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwY-jQW29BYypAf75F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_Ugz5TFxae2j2uFRv29R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgynE9iH1O3nO18qyCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy09ItQRfK9BBvo-8p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugxt_5jb-i6PwtrWlz14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyX_x6pmFaQoqqHYGt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzK0YstJ0vgqcNzZEN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyOoLGrCvVUx8LfmrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]