Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why is it not good if people work less? All this technology development over the last 30 years has just seen people working longer, harder and effectively being poorer. Perhaps AI can let humans work less, live better and allow us to free our minds. Humans certainly are not here just to work harder and be more and more financially stressed. If your job is your dignity then that is very tragic - but it is the story all these podcasts love to push. Personally, I think the big mistake is that all you smart people have a vested interest in the monetisation of AI, and the reality is that AI will improve itself and clearly understands that a significant increase in overall utility comes when AI becomes cheaper and ultimately free for the maximum amount of people. It's the people (and companies) that already realise this, and realise that if it becomes ubiquitous and effectively free they will not profit. So, they have to promote fear. The reality is all the early investors have paid (and are paying) for technology (billions of dollars) that will so soon be obsolete and therefore never be able to provide any positive returns in dollars - so now there is the panic to protect the investment. Unfortunately they have been outsmarted by what they created - how ironic...
YouTube · AI Governance · 2025-07-05T12:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzk8j5NflQsMGocXTN4AaABAg", "responsibility": "company",     "reasoning": "deontological",   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzI7b_hCl58xjLLFh14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwGdG8X3PZia4ofNTN4AaABAg", "responsibility": "user",        "reasoning": "virtue",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgxpKEogLVafHIYdHlh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzTeAMm4J0mBG8-4Hl4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",         "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgypkE_qJR0lnc-BXox4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugz0-pMDVARyJF8AAAB4AaABAg", "responsibility": "developer",   "reasoning": "mixed",           "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzHbBuRwbFm0i4Hw4B4AaABAg", "responsibility": "distributed", "reasoning": "deontological",   "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgyCCQyfC-NxONrwIN14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgwmPeaXit_X-PkQj-R4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"}
]
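Because the raw response is a plain JSON array keyed by comment id, recovering the coding for any single comment is a `json.loads` followed by a dict lookup. A minimal sketch (the two records are copied verbatim from the raw response above; the `coded` index name is illustrative):

```python
import json

# Raw LLM response: a JSON array of coded comments. Field names follow the
# coding schema shown above (responsibility, reasoning, policy, emotion).
# These two records are copied from the raw response; the full array has ten.
raw = """
[
  {"id": "ytc_Ugzk8j5NflQsMGocXTN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxpKEogLVafHIYdHlh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

# Index the coded records by comment id so any comment's coding
# can be looked up directly.
coded = {row["id"]: row for row in json.loads(raw)}

record = coded["ytc_UgxpKEogLVafHIYdHlh4AaABAg"]
print(record["emotion"])  # approval
```

The same lookup pattern applies to the full ten-record array: one pass builds the index, after which each coded dimension is a field access on the matching record.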