Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a doctor, I just want to say, we already are being pressured and pressed to the Limits of safety and patient satisfaction to see too many people. 15 minutes per outpatient visit is not a realistic enough time. You can get a bot to collect allll the HPI before you even go into the room, and that way the patient can't go "Didn't you read my chart?" Because the bot has already given us a little summary of relevant history from the thousands of pieces of information "on chart review". In the hospital, on average we had 1 minute in the morning to follow up with a patient, and 5-10 for a brand new, newly admitted patient, and a few minutes in the afternoon to come back around to update people. The only place that AI can help us to "see more patients" is if you saw us less. There will be a doctor attached to the encounter but we'd have less facetime. "I'd like to see the doctor" will be a futile request. I'm imagining. Especially if we are busy seeing MORE patients. Please don't use productivity language in healthcare, we need more quality language in healthcare. More reimbursement so that doctors in areas that we all NEED (like primary care), still want to be there. Or you will lose them all to specialties and concierge medicine that doesn't take your insurance. Pediatricians especially already have a crazy work load. Don't make it easier for hospitals and companies to put a robot between us when I'd like to see you in person. If they think "we don't need to pay a doctor to do that" then they won't.
Source: youtube · AI Governance · 2025-07-10T16:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugyib7qeKGjZ5ai-dJF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy2Ychqih4ZyA8N3ZV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxcOqq2rbrRgeJdifB4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxQmIFSn0hA_oZ7q394AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugynb-6glZy-EbuyyaJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwqNFM8fQpHkEbBSyV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx-eI0EWUqEdVLNAuR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgyI8aRoyTwJphnh12V4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz7fTyot6nGtVH5qWd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx0j72VPZlDn98OPzp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
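The coding-result table above is a single record pulled out of this batch response by its comment id. A minimal sketch of how that lookup could work, assuming the model reliably returns a JSON array shaped like the one shown (the `codes_by_id` helper and the two-record sample are illustrative, not part of the actual pipeline):

```python
import json

# Sample of the raw model output: a JSON array with one coding record
# per comment (ids and codes copied from the response above).
raw = """[
  {"id": "ytc_Ugynb-6glZy-EbuyyaJ4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxQmIFSn0hA_oZ7q394AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_json: str) -> dict:
    """Index coding records by comment id, keeping only the coding dimensions."""
    records = json.loads(raw_json)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

codes = codes_by_id(raw)
print(codes["ytc_Ugynb-6glZy-EbuyyaJ4AaABAg"])
# → {'responsibility': 'company', 'reasoning': 'deontological',
#    'policy': 'liability', 'emotion': 'outrage'}
```

In practice a parser like this would also want to validate each record (unknown ids, missing dimensions, malformed JSON) before writing codes back to the dataset, since LLM output is not guaranteed to be well-formed.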