Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI is used to make nearly everything for us, say network engineering or website creation, those careers become difficult to enter or obsolete (much lower incentive and need), and fewer people study and learn to do them. There will come a time, then, when it has been automated for so long that human beings will have lost the skill set (or at least frozen at a certain progress level) and forgotten how to do it without AI. AI then develops this skill far past our own ability. If anything breaks, if anything goes offline, no one but the AI (and its company) could fix it. This is a world where AI companies hold an intellectual monopoly across a vast range of skills and tech; we will be utterly helpless, hoping a strong-willed few decided to try and keep up anyway, with no hope for payment, a career, or anything. IMO, at the VERY least, we should control this by guaranteeing human positions in these roles, however small (i.e. a company cannot replace 100% of a position with AI in critical/infrastructure roles). I'm worried we greatly underestimate the utter loss of meaning to most Western citizens if we no longer had any/few jobs. On top of that, we will degrade our own autonomy in this way by letting AI do "all the hard work." If our brain doesn't get used, it will rust; it is fundamentally efficient. Like Dr. K said once on this channel, "Using AI is like taking the elevator instead of the stairs; sometimes I take the elevator, sometimes the stairs." Eventually we'll struggle on even a few steps up that staircase.
youtube 2025-10-09T18:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyAMGLYBaoHJDVr3A14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwN-evorx6RHjXAU4Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyhNFOnH0AMhcnUpB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzS7gNgHADUEG5HNXF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzGIRYcO7M9fyTyEWx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwvM82dmJvjQ32kCHV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxyTb_Yx8GWkiW8WQ14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwf8ONw1UL2MhuhvIp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzBHjG_fSCKe1ly1YB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxPju0cCZqXN6oRYCd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
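The raw response is a flat JSON array with one object per coded comment, keyed by comment id. As a minimal sketch (assuming a Python pipeline; `codes_by_id` is a hypothetical helper, not the tool's actual code), the batch response can be indexed back to an individual comment like this:

```python
import json

# One entry taken verbatim from the raw response above, shortened to a
# single comment for illustration.
raw_response = """[
  {"id": "ytc_UgzGIRYcO7M9fyTyEWx4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]"""

def codes_by_id(raw: str) -> dict:
    """Parse a batch coding response and index the rows by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = codes_by_id(raw_response)
row = codes["ytc_UgzGIRYcO7M9fyTyEWx4AaABAg"]
print(row["emotion"])  # fear
```

Looking up the displayed comment's id in this index is what would populate the Dimension/Value table shown under "Coding Result".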