Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ok I'm not an expert, just a regular guy. But am I the only one here thinking our current LLMs are cool but VERY VERY far from taking over any jobs whatsoever? AI is not going to take over any jobs guys, zero! AGI by 2027 is pure fantasy, this is never, ever going to happen. Everything I hear about AI is pure fear porn. We are hundreds of years away from this point. I use AIs every day but I wouldn't even let it write emails to customers. Common guys, LLMs are near their end of what they can do and they are NOT a fast track to AGIs. Everybody relax there is no danger. I am a programmer, I am so unaffraid that AIs will take my job. This is comical to me. I think people believe the fear porn because of a lack of understanding. Tucker Carlsen thought ChatGPT was concious! Common guys you can't have a serious discussions about this with non-technical people. Even technical people who think we're going to create AGI by 2027, nope, this is pure empty speculation. The first AGI we build will probably be build out of an analog computer, not a digital one. And also, it will most probably be much dumber than us.
youtube Cross-Cultural 2025-12-07T11:4…
Coding Result
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: approval
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugxbykpp4yfJCoc1q8J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxEBSsIwch2BpyLh2p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzM9i3988dszi_Z4it4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzEJS8PG36kLZfsQwN4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyiTcOjWLXq3BHh-PN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwHXDj84SL2YMWjHTB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxSp2e_Dm732_h1f394AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugysnf9GWN-Dkqf5qwt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw87H9NRm0_CUtm40V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxD2mnTBSZ2DTahjCF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
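When inspecting a raw response like the one above, it helps to check each coded record against the allowed values for every dimension before accepting it. The sketch below does this in Python; note that the `CODEBOOK` sets are an assumption inferred only from the values visible in this batch, not the project's official codebook, which may define more categories.

```python
import json

# Allowed values per coding dimension -- ASSUMED, inferred from the values
# that appear in the batch above; the real codebook may include more.
CODEBOOK = {
    "responsibility": {"none", "developer", "distributed",
                       "government", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological",
                  "contractualist", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"resignation", "outrage", "approval", "fear", "indifference"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors."""
    errors = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    if not isinstance(records, list):
        return ["response must be a JSON array of coded records"]
    for rec in records:
        rec_id = rec.get("id", "<missing id>")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{rec_id}: {dim}={value!r} not in codebook")
    return errors

# Example: the first record from the batch above passes cleanly.
raw = ('[{"id":"ytc_Ugxbykpp4yfJCoc1q8J4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
print(validate_batch(raw))  # prints [] -- no errors
```

An empty error list means every record used only codebook values; any off-codebook label (a common failure mode for LLM coders) is reported with the offending comment id so it can be re-coded.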