Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Look gents, I work as a solution architect at a big corp, on a well-funded project. I have personally worked on projects that have collapsed away 1000+ FTE positions, even in one single new system. BUT! The specific things an LLM can do are bound to 'strict rules, structured output'. The kinds of roles that are ripe for conversion might be corporate lawyers who deal with contracts and regulations (strict rules) and have known output (contract validation / updates). *Usually* the LLM tools work best when a human works out when they should be used. Even the 1000-FTE system left 200 FTEs to 'run' the system and provide oversight. Over time, we might use a second LLM model to govern the first. THE PROBLEM YT PEOPLE DON'T SEE: LLM-based projects are running off annual budget allocations. All big corps have IT budgets split a thousand ways, but the big investments are 1) maintenance of operations (upgrades/updates/etc.), 2) cloud transformation (almost all companies are stuck midpoint), 3) cybersecurity fears — nobody wants to be in the news, and 4) developing for future/new business. Only that last bucket is getting LLM cash, and that last bucket is heavily invested in just outsourcing to SaaS or building really bespoke business systems. It's easy to fear AI will take over everything, but when AI spending on the IMPLEMENTATIONS is getting investment at sub-1% rates, the limitation becomes structural. The only way for AI to 'break out' of this throttle is for small companies, like Uber was, to spring up and offer new services competing with old models. They 'can' be fast, but they often face regulatory battles from incumbents, which also slows the adoption speed. I'm a decel; I think AI will be overall bad for humanity, but that'll be bad WHEN it starts causing unemployment at a rate that is actually unsustainable. I'm fearful like you guys of this getting out of control, and ever vigilant, but I'm not seeing warning signs.
The big models are developing mostly in a bubble, because we, out here in the real world, don't have problems to apply the tools to, nor budgets until next business year to even get funding. If you want something else sobering, go back 10 years and look at the state of robots and self-driving cars... and you say, "Oh... our progress is actually pretty slow." The changes in society are breakneck compared to technology right now. Next time you plug in your phone charger, remember that we first got that USB-A connection on PCs around 30 years ago. Go buy a laptop with a 2.4 GHz mobile i5 and 8 GB of RAM, a 256 GB SSD, integrated graphics over HDMI, USB 3, 10 h of battery at 1 kg (2.2 lb). That was my Toshiba Kirabook from 2012; it cost me $800 USD. If I went to the store today I could buy the same-specced laptop for about the same price, 12 years later. In 2002 I bought an O2 XDA mini, and was playing Age of Empires on my phone waiting for the train. 23 years ago. Tech is not moving as fast as we think in most spaces. We get little improvements and focus on those; LLMs are one area where we got recent improvements. Let's keep our heads on.
YouTube · Viral AI Reaction · 2025-11-23T02:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxyHFc1Xh3wGJqpKHx4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugz8-iCu8U1VhLWthbd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgzAuY_l0gZtLiN_qS54AaABAg", "responsibility": "government","reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgxR0dyOilBWIYADrH14AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgyN2wRnLupH_YTxQuV4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgwDUwmPdrNNg1NR1_94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgwYTzvMP2G4gCU-i3x4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwdJs-KiFSOTaGfO3p4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_Ugxd4QhbAyixbadOZYB4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability",     "emotion": "resignation"},
  {"id": "ytc_UgytM2VXOUFF6YilBGh4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",          "emotion": "outrage"}
]
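A raw response like the one above can be checked before use. Below is a minimal sketch, assuming the response is a JSON array of objects that each carry exactly the five keys seen in the log (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `parse_coding` is hypothetical, not part of any pipeline shown here.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgxyHFc1Xh3wGJqpKHx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_Ugz8-iCu8U1VhLWthbd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# The five dimensions each coded record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding(raw_text):
    """Parse a raw coding response and verify every record has the expected keys.

    Returns a dict keyed by comment id for easy lookup.
    """
    records = json.loads(raw_text)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    return {rec["id"]: rec for rec in records}

coded = parse_coding(raw)
print(coded["ytc_UgxyHFc1Xh3wGJqpKHx4AaABAg"]["emotion"])  # indifference
```

Keying the result by `id` makes it straightforward to join a coded record back to the original comment text, as the report above does for `ytc_UgxyHFc1Xh3wGJqpKHx4AaABAg`.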