Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
1:25:94 I'm very biased that I do not like AI, There's so much I dislike about Eric's stance I can't cover it all. "By 2030 everyone will have an AI specialist assistant" I don't want that, if everyone has a AI personal assistant, then all the AI assistants are competing with each other and the people making it worthless. its like when too many people have a college degree having a degree becomes meaningless but your still expected to have one to get a job flipping burgers! Because companies use AI to filter thousands of applications its impossible to even get a job flipping burgers in the first place, all the people who will loose their jobs cause of AI will not have an easy time finding new jobs even if their were magically new jobs for them to have and historically those jobs pay less and demand you be more productive, the invention of the cotton gin didn't create more leisure time like they thought it increased the work load to maximize productivity, whenever something becomes a little easier you have it fill that time with even more productivity. The only people going to work less are the unemployed. I DONT WANT TO BE MORE PRODUCTIVE! god damn it! I want to be able to be far less productive and have the same if not better quality of life so I can put my energy and productivity in more important things. Unless AI promises universal housing for all so I can stop wasting the majority of my income and energy on rent then I'm not interested at all with AI. But none of the AI people are , their just dangling the idea that I can be more productive to make more money but you wont you'll just be more productive for the same or lesser pay to make billionaires richer.
Source: youtube · AI Governance · 2026-03-31T02:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgySFcshh5-X4lItRtp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxIONaIMlo8WIvtpdF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwBllp6RiYXJ9Ita_d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwAZHHj5Pcq7qZBDjB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxN9moo7-hU_zlY4jh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyy-n3yIwhq58IeRLF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwxWNysMLGAieb7vPR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwsdV4pnJyKx0Yhhsh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzgIFfBoL5b_fXI3Yd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyxno_vJOKMob-l8s94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
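A minimal sketch of how a raw response like the one above might be parsed back into per-comment codings. The `parse_codings` helper and the truncated two-record sample are illustrative, not part of the pipeline; the dimension names are the ones shown in the coding-result table, and missing dimensions default to "unclear":

```python
import json

# Illustrative excerpt of a raw model response: a JSON array of
# per-comment codings, each keyed by a YouTube comment id.
raw = """[
  {"id": "ytc_UgySFcshh5-X4lItRtp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyy-n3yIwhq58IeRLF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions from the results table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Map each comment id to its {dimension: value} coding.

    Dimensions the model omitted fall back to "unclear".
    """
    records = json.loads(text)
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }

codings = parse_codings(raw)
print(codings["ytc_Ugyy-n3yIwhq58IeRLF4AaABAg"]["policy"])  # regulate
```

Indexing by id this way makes it straightforward to join the model's output back onto the original comments for inspection pages like this one.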