Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy is so laughably off in his predictions I can't even bear to listen to the whole thing. If he wanted to sound alarmist then let him, but honestly saying that humanoid robots will be able to do your plumbing by 2030 and AGI comes in 2027 is plain delusional. This guy completely disregards the fact that it's doubtful LLMs can even reach AGI level; their formula is basically reaching its limits and we can totally see diminishing returns in the latest and greatest language models. Did he even try to apply these models and do any work with them? Because it sounds like he did not. They can't even remember what was said 5 minutes ago and don't understand it was important for the context. They are indeed impressive and streamline my work, but honestly we're nowhere near AGI. And then he just speeds through the fact that you need to apply these models in real-world solutions to actually do any meaningful transformation for any industry, and we're yet to see great LLM or LLM-agent adoption. I mean beyond you chatting with GPT to help you with your work, which is kinda where we are, there are no large agentic systems applied as a solution anywhere, and we're already 4 years after LLMs became a thing. There are so many industries that haven't even industrialised, so many parts of the world without reliable access to water or power, let alone internet, and he's saying we'll see 99.9% unemployment in 2 years. Honestly I get a feeling guys like this have been lost in their research bubble way too far for their opinions on the matter to be grounded in reality.
youtube · AI Governance · 2025-09-05T06:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw1h7R_l9UBS8DQPkx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwm24MGLOjMoYXd8Fx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzpSAXqa6OhWcjdgXF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy-f5nk_54UPcBuCeN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx1CloVCatBdICp38F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzBxXcujfxsGco05eN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxR5KPYJUdF-2zgepN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwD8QWpMsMKEP6lBu94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzVm9Bx8-GS27y4ovV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx-rciJuXC_pedG-094AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
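The raw response is a JSON array of per-comment coding records, one per comment id, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and inspected — the variable names and the tallying step are illustrative, not part of the actual pipeline, and only two of the ten records are reproduced here:

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above (the full batch has ten).
raw = '''
[
  {"id": "ytc_Ugw1h7R_l9UBS8DQPkx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx1CloVCatBdICp38F4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
'''

records = json.loads(raw)

# Index the batch by comment id so one comment's codes can be looked up,
# e.g. to populate the Dimension/Value table shown above.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_Ugx1CloVCatBdICp38F4AaABAg"]["emotion"])  # -> outrage

# Tally a single dimension across the batch.
emotions = Counter(r["emotion"] for r in records)
print(emotions["approval"])  # -> 1
```

Indexing by id rather than by list position keeps the lookup robust if the model returns the records in a different order than the comments were submitted.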