Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know, it's so interesting to see how most AIs focus on tasks that originate in the humanities, like art, language, writing etc. But when it comes to mundane bs like cleaning the bloody toilet or doing laundry, there is apparently nothing in the works. Consider me a conspiracy theorist, but I think elitists and CEOs are at fault as always. They think that humanities are too expensive for what they provide. The CEOs want the art, the writing, everything their narcissistic monkey brains would never be able to come up with. But they don't want to pay for it, because guess what? These are skills that take time and hard work to develop, deservedly getting paid highly, because before AI these skilled people were never replaceable. On the other hand, cleaners and housekeepers while doing respectable, important jobs don't have too high of a skill ceiling. People working these jobs are easily replaceable, you can work them like slaves and underpay them. That's why there is no priority on creating AI for those unfun, ungrateful jobs. Narcissists, elitists and bastard CEOs being the bane of the world as usual and the reason for AI being developed into the wrong direction.
youtube 2025-01-21T21:2… ♥ 1
Coding Result
Dimension      | Value
Responsibility | company
Reasoning      | virtue
Policy         | none
Emotion        | outrage
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwfWNEUY0StT2-VQER4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxf_wuKSFt3Ye3QpD54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyG-e26F_1KrV2RuO94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "sadness"},
  {"id": "ytc_Ugw_5BM4NOggLJCJpiV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw3P7mhQ71y64RgWt54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxGMUt1Hr6eW5hv6wJ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxPAqyvV8Ff3_wIhol4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz155pULvawtpn14Hx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwFAi2BlwJjB06gwo94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwGZYWRCFHRIE_OAkh4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
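As a minimal sketch of how the per-comment table above can be recovered from the raw response: the model returns a JSON array of codes, one object per comment, which can be parsed and indexed by comment id. The id values and dimension names below are taken from the response itself; the extraction logic is an illustrative assumption, not the page's actual pipeline.

```python
import json

# A two-entry subset of the raw LLM response shown above.
raw_response = '''
[
 {"id": "ytc_UgwfWNEUY0StT2-VQER4AaABAg", "responsibility": "company",
  "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
 {"id": "ytc_Ugxf_wuKSFt3Ye3QpD54AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
'''

# Parse the array and index the codes by comment id so a single
# comment's coding result can be looked up directly.
codes = json.loads(raw_response)
by_id = {entry["id"]: entry for entry in codes}

# Reproduce the "Dimension | Value" table for one comment.
record = by_id["ytc_UgwfWNEUY0StT2-VQER4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {record[dimension]}")
```

Indexing by id (rather than array position) is what lets a viewer page like this one pair each displayed comment with its codes, even if the model returns the array in a different order.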