Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You are totally missing the point here. Automation in the past was effectively targeted at (a) reducing labor costs (often by automating repetitive physical tasks) or (b) deploying dedicated programs designed to automate specific intellectual tasks. AI is completely different than past automation revolutions, because it is a platform that constantly learns and recursively improves itself at intellectual tasks. You couldn't automate, say, a lawyer 20 years ago. Today, you can train an AI on how to be the best defense lawyer on the planet. Keep in mind, AI today is the WORST it will ever be. It's only going to improve from here, and likely at an exponential rate. Additionally, you could argue that human-centric tasks are still safe, like care-takers and gardeners. When we package AI into a humanoid robot body that has been trained on hundreds of thousands of hours of those activities, say goodbye to those tasks as well. Our entire economic system is going to change in our lifetimes, and comparing the AI revolution to anything in the past is absurd.
Source: youtube · AI Jobs · 2025-06-24T23:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxN927riwg_tvJoAp14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz_ts6cGCvgZv-Spvl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgygMTV9XMC-_AEmwbV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwfEHYlC_HV_BiUTyV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz9VVMIR02oNs1T99V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgycdUL7rIZ7NxFuWid4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzKaWOMmHly4LRlnWR4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxPXMdza097wlSJlzZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7s5HeUOAMsgDefEJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxe7QDcWonSnCp0K594AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
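The raw response is a JSON array with one object per coded comment. A minimal sketch of looking up the coded dimensions for a single comment id, assuming the structure shown above (the `coding_for` helper and the two abbreviated entries in `raw` are illustrative, not part of the tool):

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codings,
# trimmed to two entries with the same keys as the full response above.
raw = '''[
  {"id": "ytc_UgxPXMdza097wlSJlzZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzKaWOMmHly4LRlnWR4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_UgxPXMdza097wlSJlzZ4AaABAg")
print(coding["emotion"])  # indifference
```

Matching on the comment `id` rather than array position keeps the lookup robust if the model returns entries in a different order than the input batch.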