Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I realize this is an old feed and I'm just listening to it now, but I still have some questions. Everyone talks about how AI could get rid of humans... for instance, by introducing some bizarre virus that infects everyone on the planet before they know they have it. I can imagine AI coming to the conclusion that that's a good idea; I can imagine AI inventing a virus, or the chemical properties of a virus, but I have trouble understanding exactly how it would go from the theory of the virus to infecting people. It can't load up a syringe or drop chemicals into a water supply... How would it make these little practical physical leaps? Yes, in theory all of these things are possible, but there are many things AI just can't do. AI can't build a house; AI can't change the oil in a car. There is a physical world that AI has to have access to that it currently doesn't. I've seen Boston Dynamics robots, but I just don't see them replacing humans yet.
youtube AI Governance 2025-07-09T16:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxU2bWA457z-QMpgvZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx4zzloMA9x2hVbbfF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxq2hJdasqFtSf20254AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxYPDFeGPjEVi72wwJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxiHsIXSroI9pIBIoh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzNVCDhZlG2JmGJaiF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx80ZqjtFaPu1rArIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugzt5AGoDN7iibkaHRp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzFiOCm4Ap4bo9dKSB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_OlHWYtfvWH-48fN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
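The raw response is a JSON array of coded records, one per comment id, so inspecting the coding for a single comment amounts to parsing the array and selecting by id. A minimal Python sketch of that lookup, assuming this record shape; the helper name `lookup` and the trimmed two-record sample are illustrative, with ids taken from the response above:

```python
import json

# Trimmed sample of a raw batch response: a JSON array with one coded
# object per comment id (two of the ten records above, for brevity).
raw_response = """[
  {"id": "ytc_UgzNVCDhZlG2JmGJaiF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz_OlHWYtfvWH-48fN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coded dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            # Fall back to "unclear" for any dimension the model omitted.
            return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return None

records = json.loads(raw_response)
coded = lookup(records, "ytc_Ugz_OlHWYtfvWH-48fN4AaABAg")
print(coded)
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'liability', 'emotion': 'fear'}
```

Parsing the full array first (rather than string-matching on the id) also surfaces any malformed model output early, since `json.loads` raises on invalid JSON.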