Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's like that saying: just because you can do something, it doesn't mean you should. AI is coming; one way or another it's snuck into many things already, whether you want it or not. For me the questions are: why do it, why use it, and for governments, how does it improve the lives of the citizens you represent? Becoming a drone for the machine, a data entry person, when your ability includes so much more but remains unfulfilled because the jobs that would use your skills no longer exist, doesn't make sense, and on a large scale will lead to some serious issues for humanity. I can see that if the government owned the AI, then they would be rich and could use that wealth to promote a better lifestyle for all their citizens. Leaving this level of power unchecked and unregulated by governments, serving not all citizens but just the currently wealthy, is the key issue. Just because another country does it doesn't mean we should; just because another country jumps headfirst off a cliff into the "mega AI" abyss doesn't mean we should jump blindly into it. Let me say this: if we cannot control the speed and resources of AI happening in our country now, how can we control what AI becomes? And if we cannot do that, then perhaps humanity isn't needed at all.
youtube 2026-04-17T02:0…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxECaIY8YffnlMiVi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzu7u08CrhdjXqEePB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugya_vBrgVnIRZd9nfh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyKS-NcG30KZaYQ83x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyQUwkMcPPMBAv9CNF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxexCxEB5GAcVkpQI54AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgynFyXP6w0pdumDKAV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwu3Q1Pvfi7dM_PZw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJgm25ImbjBaaKd854AaABAg","responsibility":"government","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgyJz3MUOEldWGAv9WZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
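To inspect the coding for a specific comment, the batch response above can be parsed and indexed by comment id. The sketch below is a minimal example, assuming the raw response is available as a string (here truncated to two of the ten records shown above); the function and variable names are illustrative, not part of any existing pipeline.

```python
import json

# Hypothetical variable holding the raw LLM response shown above
# (truncated to two records for brevity).
raw_response = '''[
  {"id":"ytc_Ugzu7u08CrhdjXqEePB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyQUwkMcPPMBAv9CNF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a batch coding response and index records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

codes = index_codes(raw_response)
# Look up the coding for one comment id.
print(codes["ytc_UgyQUwkMcPPMBAv9CNF4AaABAg"]["policy"])  # → regulate
```

Indexing by id makes it cheap to cross-check any single comment's coded dimensions against the raw model output, which is exactly what this view is for.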