Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI Godfather's statement that letters go out 5x faster through AI, so you therefore need 1/5 of the help, perhaps works in terms of getting letters out, but it does not work in terms of whether the AI actually solved the complainant's problem. Getting a vacuous AI letter that cannot recognize the nuances of a situation and gives a formulaic answer is not satisfying to the worker or the recipient, even though the shareholders are now richer! In health care, I have already seen AI give an inaccurate diagnosis of an X-ray and an inaccurate record of a medical encounter, and it certainly cannot provide compassionate care. Hinton and Musk talk efficiency; they do not talk effectiveness. What will be missing in this world: compassion and concern. The discussion of automated weaponry was really disturbing in that neither the host nor Hinton mentioned that people in the weaker country are being killed. Humans suffer from that, and performing warfare with "video game" weaponry loses sight of the terrible loss and suffering of those killed and whoever remains behind. There is also the ability of criminals to use it to commit murder. You want to know what I would like to see differently about your show: talking about feelings, compassion, and care for individuals in society.
youtube AI Governance 2025-08-24T01:5…
Coding Result
Dimension: Value
Responsibility: company
Reasoning: consequentialist
Policy: regulate
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxMudJxZjeaMgb5-p54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzuqB2LCE9Kn7xIzHF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugyc7eju22zKvfe_As54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugye36hS-612PeCouzN4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwYySQO5BVViBP3d6d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwsgHCCAyQ0tzLm_NV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxQFVMoMi7LYCZuhvZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwM1gm4nmLDb__sMql4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxMZwVy5-ibmNLpNJh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyC1XiFqwAn048LzLd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
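When inspecting raw LLM output like the array above, it helps to machine-check each record against the coding schema before trusting the coded values. Below is a minimal Python sketch; the allowed category sets are inferred from the values that actually appear in this response (the real codebook may define additional categories), and the `validate` helper name is my own.

```python
import json

# Allowed values per dimension, inferred from the coded output above.
# Assumption: the real codebook may define more categories than these.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and return any out-of-schema values."""
    errors = []
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                errors.append({"id": rec.get("id"), "dimension": dim, "value": rec.get(dim)})
    return errors

# One record from the response above: conforms, so no errors are reported.
raw = ('[{"id":"ytc_UgzuqB2LCE9Kn7xIzHF4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}]')
print(validate(raw))  # → []
```

Checking every batch this way catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt the coded dataset.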