Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let's assume AI chips are not embedded in the human brain 100 years from now. Also, let's assume money (a human invention) still matters. Seriously. Most employees (working for money), before and after the internet, or even before WW2, only do repetitive jobs. They (the majority of people in the world) wake up, go to work, do their routine tasks, and then go home. Not everybody is an entrepreneur; only the most educated are, and their vision of the future is brighter. I agree with this, but these people are still eyeing the $$ and need to compete with the alien/AI (a non-human with a much higher IQ who thinks humans are real dumbass idiots). With the exception of the most educated, humans' mindsets are usually static. Oh, and when humans go to university in the future, the professors will not be human. Also, the podcasters, influencers, and YouTubers of the future will all be non-human. They will trade among themselves, and we humans will just act like horses or dogs, because we will not understand what they are talking about. Hopefully we are not going to be quarantined or put in a museum. I wonder why AI would want to deal with humans at all, especially teaching them at university (knowing that humans are too stupid for education to help them).
youtube AI Jobs 2025-06-27T02:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwYk1lmtOnE-IVCJZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyIuwaUXocY4V_4Nap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPMUcixL85B7PxH8N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyYzYiRHC-qy4TJ6aF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzfOzvlqgljpt7SgJV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwK7k9mDe7K8wHWg2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxyAiBxv26tqJYY5oZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6nCKFMdiXGbpTijB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxopA5Di21eX02gdYF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyBD7-6LecFrUUJqrd4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
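The raw response above is a JSON array of per-comment codes, and the coding result shown earlier is the record matching this comment's id. A minimal sketch of that lookup step (the parsing code and variable names here are assumptions for illustration, not the tool's actual implementation):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment
# (truncated here to the record for the comment shown on this page).
raw_response = """[
  {"id": "ytc_UgyYzYiRHC-qy4TJ6aF4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "resignation"}
]"""

# Index the records by comment id so each comment's four coding
# dimensions (responsibility, reasoning, policy, emotion) can be looked up.
codes = {record["id"]: record for record in json.loads(raw_response)}

record = codes["ytc_UgyYzYiRHC-qy4TJ6aF4AaABAg"]
print(record["emotion"])    # resignation
print(record["reasoning"])  # unclear
```

The id ties the coded record back to the original YouTube comment, which is how the "Coding Result" table for a single comment is populated from the batched response.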