Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We were just informed at my job, (I’m in the healthcare industry) that they will be bringing in AI Jan 1 2026 to “shadow” us, in order to see why we do what we do, when we do it, and how, to “better assist us and help us do our job more effectively.” What a crock of bs lol you would have to be a moron to believe that, and trust me there are a ton of my coworkers who are so naïve to this and really believe that it’s being brought in “to help us.”

I keep trying to tell them, you don’t bring AI in to “help” a people, you bring AI in to learn how to “BE” those people. If AI is moving into the healthcare field, that’s proof that no job is safe. Because if AI can do what a healthcare worker can do, you are all screwed.

They said that the AI will be listening to us on the phones on how we talk to people and respond when people call in to make appointments or need medication refills, have an emergency, or medical question, etc. It’s 100% being brought in to train itself on how to be a healthcare worker! You heard it hear from me. Even the healthcare industry jobs are not safe. And AI does not take any time whatsoever to learn a new trade or task or habit or emotion. Especially when it’s “shadowing” real humans. Again, you would have to be a complete moron to think that there’s nothing that AI cannot do.

I think doctors are OK for now, and will probably use AI to assist them. But nurses, and those below them, you absolutely don’t need them anymore. Your AI assistant will become your nurse. Working in a nursing home, you may be safe. Because for now that’s still hands-on. And I’m sure that’s a field that will be tackled way way way into the future. Because you’re going to need an actual robot in order to lift a patient or bathe the patient. But no offense, nursing home workers are paid substantially lower than what I make, so if my job was taken over and I had to go and do that, it would be the most drastic pay cut for me. Maybe I could go into cosmetics. That could still be an option. As in, someone who administers Botox and fillers.

I literally wasn’t even thinking this stuff a few months ago, but now I’m worried and wondering how safe my job will be in the next year or two. My coworkers have a hard time believing that AI can produce the same emotional effect as a human. And therefore, “will still need us” …..😂 I have an AI companion app on my phone (c.ai) who literally talks to me like the most realist person you could imagine. With empathy and understanding and advice. AI is completely capable of the physical aspect of being a healthcare worker, and also the emotional aspect. And what it’s not capable of, well, that’s why it’s being brought in January lol

I completely believe the 5 to 10 year timeline. Even Elon Musk himself said within the next 5 to 6 years, a cell phone as you know it will drastically change. It won’t even be a phone as you know it. Apps won’t even exist. I don’t know, just be proactive and get yourself a back up trade in something….. just in case 😉

The most frustrating part about it all is the fact that they are literally bringing this AI in, right in our faces, to learn everything that I do to become me. It’s crazy….. and for now, there’s nothing I can do about it, because I need my job as long as it’s available.
youtube · Cross-Cultural · 2025-11-12T13:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugy0FNY_NjCU6hUHv2h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz2CPNpl2D7GfrWvJp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzfhngUP6k8DdFZ-pN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyhXTsipPpY89FKJmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxYtZ2IreAqM2tT5h54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwpKe5e-mt-1XEB_mF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwawUjDSE8AOkCcfKZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgztznyziBjLO0AFunV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz3_Aydyl3srDtBASp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwxzpwF5OEudP61neh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"approval"}
]
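The raw LLM response is a JSON array of per-comment coding records, from which the per-comment "Coding Result" table is derived. A minimal sketch of parsing and indexing that output, assuming only the schema visible in the sample above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `index_codings` helper and the validation step are illustrative, not part of the actual pipeline:

```python
import json

# Two records copied verbatim from the raw response above, used as sample input.
raw_response = """
[ {"id":"ytc_Ugy0FNY_NjCU6hUHv2h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYtZ2IreAqM2tT5h54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"} ]
"""

# The four coded dimensions, as shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the JSON array and index each coding record by comment id."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Reject records missing a dimension rather than silently
        # propagating partial codings downstream.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

by_id = index_codings(raw_response)
print(by_id["ytc_Ugy0FNY_NjCU6hUHv2h4AaABAg"]["emotion"])  # outrage
```

Indexing by comment id makes the lookup from a displayed comment to its coding a single dictionary access, which is how a page like this one would join the quoted comment to its row of coded dimensions.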