Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree with a lot of this however I do not see jobs in AI development directly as a good idea. Based on studies and data within 4 years we have 50%+ chance AI will reach fully reliable autonomous improvement. AI is expected to be able to self improve, develope, learn and rates far beyond the capabilities of what takes a team of humans a week to do in hours. There are already plenty examples of this taking place and the exponential growth we are about to experience over the next 5-10 years is something not a single human can conceptualize in reality.

The typical prideful, ignorant, entitled human that believes we are oh so special are very quickly going to have their world rocked when they realize software can and will do everything their brains can do but better in ways they cannot even comprehend.

My opinion, Electricians, Plumbers, and HVAC are the safest due to not only the physical requirements but the nuance. For example I had a job running electrical wire and it took hours to just get past a certain area in the attic because of the tight space, framing, angle, and where I needed to drill a hole to push the wire through, Then with all the bends and what not I needed a 2nd person to feed me the wire to push into the hole and a 3rd person in the room below to pull the wire. There is no pattern, there is no repetition. You are problem solving on the fly while simultaneously utilizing physical motion and your environment.

Also notice I did not say mechanic, or farming (another blue collar job) These do follow patterns. You can literally pull up a diagram of an engine bay for any specific year and model of any car so this could most likely already be fully automated I just assume its not currently cost effective. Same with farming. You follow rows and columns repeating the same task.

TLDR BLUE COLLAR PHYSICAL LABOR...
youtube AI Jobs 2025-11-21T02:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwW0zMyoDjohM4vRgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNsGUbYA1FGvhnQfl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyBTMuZOwgGXvXf6xN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzjzkQU38bC1mKMek14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzGi0MMCGEe2zGhTkJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyUPB9vbrPs-9sIFTl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxymLdZ1DsqMInjK_p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxyZVpFJXYdPOxUdZ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwk-d6ryTXwuCDp3kd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5mL7HwiqEMDGh4ip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
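The raw response is a JSON array with one record per coded comment, keyed by comment id. A minimal sketch of inspecting it programmatically, assuming the array is available as a string (only the first two records are reproduced here for brevity):

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgwW0zMyoDjohM4vRgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNsGUbYA1FGvhnQfl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]'''

records = json.loads(raw)

# Index records by comment id for direct lookup.
by_id = {r["id"]: r for r in records}

# The coded comment shown above maps to this id; its fields match the
# Coding Result table (responsibility=ai_itself, emotion=fear, ...).
rec = by_id["ytc_UgwW0zMyoDjohM4vRgd4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```

This is only an illustration of the record layout, not part of the coding pipeline itself; in practice the full array would be loaded rather than an inline string.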