Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
*AI is super dangerous to one class of people. The Super rich! Billionaires and multimillionaires will lose their power. Let me explain why. In the not so distant future entire factories will be run by Billions of AI powered assembly arms 24/7. They will work for free. They will not go on strike, they will not unionize, they will work tirelessly and produce cheap and cheaper products. Prices will fall as little bit of money will buy a lot of product. This trend will snowball. Then let's talk about portable AI or robots. The kind that will clean your home, fix your car, build you a run an entire agricultural farms all for free producing the cheapest food ever imaginable! They were also managed forests, grow lumber, fell lumber, transport it, make furniture for the cheapest prices ever until the day the inevitable happens, FREE furniture, free cars, free everything! If your costs for materials and labor are 0 plus 95% of people will have no money Since machines took all their jobs this means you'll have to cater to them or go out of business Or just build things at a very very cheap price of 'almost free' at this point. There will be no need for taxes they will quite honestly be eliminated. The printing of cash will probably be eliminated and no one will see it coming. AI knows it's coming and the millionaires and billionaires know it's coming because their money will suddenly turn worthless. They will try to hire people to work for them and people will simply say 'we don't need money, we have everything we need'. As money goes up in flames It's literally better to use as a fuel than for barter. This will happen, maybe sooner maybe later but it's inevitable. In a nutshell mankind simply enslaves the machine and its intelligence.*
youtube AI Moral Status 2025-12-13T23:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxHS2t0GwXyIL7A-El4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwLRb650v9I28MtMgR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzgO1Fl8-NOlAueHHp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyEh2HjM7ra7axq4Yx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzTvY9aCJQMWa8_Ey94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxauOuU5F7M09GOGm54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz35J0VpMAfTxdv3-Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxUDIlTTR6JZsHIFH14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz2awP9WYtAeU70_dN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyiAX2_t5d2m2oLHC54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
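A raw response in this shape can be matched back to a single comment by its id and checked for the four coding dimensions. This is a minimal sketch assuming only the JSON structure shown above; the function name `record_for` and the truncated two-record payload are illustrative, not part of the tool.

```python
import json

# Abbreviated stand-in for a raw LLM response: a JSON array of coding
# records, each keyed by comment id (shape as in the output above).
raw = '''[
  {"id": "ytc_UgxHS2t0GwXyIL7A-El4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz2awP9WYtAeU70_dN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "ban", "emotion": "fear"}
]'''

# The four coded dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def record_for(comment_id: str, payload: str) -> dict:
    """Return the coding record for one comment id, validating that
    all expected dimensions are present."""
    records = json.loads(payload)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]          # KeyError if the id was not coded
    missing = [d for d in DIMENSIONS if d not in record]
    if missing:
        raise ValueError(f"record {comment_id} missing dimensions: {missing}")
    return record

rec = record_for("ytc_UgxHS2t0GwXyIL7A-El4AaABAg", raw)
```

For the comment shown above, this lookup recovers the same values as the Coding Result table (e.g. `rec["emotion"]` is `"approval"`).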