Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Another thing about Neils "just find something the AI can't do" position regarding job losses: I'm in my mid-50s now, working in IT since I left school, have a university degree in social sciences, so I'm not the worst educated person out there. I'm currently employed but would like to switch jobs. It's seems nearly impossible to find something. Even jobs I'm qualified for, they just turn me down because "I don't fit the profile". Meaning: I'm too fucking old to even be considered. So, it's not AI that is currently my problem, but no matter if AI is killing your job or some other reason: people over 40 will struggle finding new jobs. People over 50 will most likely be fucked once AI is really coming. There is no "find something the AI can't do". Just isn't. Sure, some will get by, but most won't. In the long run it probably won't be a huge problem, because 1. AIs need to be trained, and someone has to create the data for the training, so we can't just stop working and think "AI is gonna do it for us". AI doesn't exist without human data in the first place. And 2. people who are young or not born today will have their whole lives to adjust to this new situation, the education system will have time to adjust to teaching new skills, etc. People in their 30s or 40s or older can't start from scratch, not in the US where education is prohibitively expensive, and not even here in Europe where it's theoretically cheap and accessible, but who's got time for a brand new education when they have to pay the freaking bills? And frankly that was the same when the automobile came to replace horse carriages and all the other technological jumps. The people that were directly affected by these jumps were not able to adjust quickly enough in their lifetime. Only the ones coming after them had the time and opportunity to learn the required new skills for this new world. It's just like that now with AI, only tenfold.
youtube AI Moral Status 2025-07-24T12:5…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzH6TXipICLYs9pgFp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzyWf18CvHO95gfAoR4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwX5YcFnlSjRPk1Vap4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxCHXgRxtpPUg6cd994AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwMUMBop0KNA46o58N4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugywzy_0LtEhRErC4Lt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwsUjs54L74Xgnvgxx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxwELpb3zk4KZ5kEjJ4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyIB2DOo1JeJsdFHKF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzaBNGj1b-H78DO7AZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
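A downstream consumer would typically parse this raw response and sanity-check each row before storing it. Below is a minimal Python sketch of such a validation pass. Note that the allowed-value sets are inferred only from the values visible on this page, not from a confirmed codebook, and the `parse_codings` helper is a hypothetical name, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from this page's output.
# The real codebook may define additional or different labels.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "unclear", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the raw LLM response and keep only rows whose values are known."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Example: the row corresponding to the comment coded above.
raw = (
    '[{"id":"ytc_UgwMUMBop0KNA46o58N4AaABAg",'
    '"responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"resignation"}]'
)
print(parse_codings(raw)[0]["emotion"])  # resignation
```

Rows with unrecognized values (e.g. a hallucinated label) are silently dropped here; a production pipeline would more likely log them for manual review.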