Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The whole point is being missed. Think in comparative terms. The industrial revolution was say, a 10. AI is anywhere from 100 in the short term, to millions in the medium to long. If you haven't heard about it, look up Mythos, Anthropic's latest AI that they've admitted to. GPT 4 was trained on 1.8 trillion parameters. Mythos has 10 trillion. 10. Trillion. It can compromise pretty much any digital aspect of infrastructure. Banking, the power grid, the internet. And that's today, when only a small handful of people have access. Now, look up scaling laws. By 2030, if enough power can be generated, it is plausible that models will have 50-100 trillion. 100 trillion, for reference, is the number of weighted connections in a human brain. The very best AI has a tenth of that. Yet they have already surpassed us in math, in general intelligence, in coding and science. By 2030, there will be millions of robots working in production in Chinese factories. By 2035, we will be outclassed, outsmarted, and outnumbered by digitally intelligent entities. We are the analogue version. This video dramatically underestimates what is happening now, and what the real trajectory looks like. Yes, we will lose hundreds of millions of jobs. Probably within ten years. Hundreds of millions more than we gain. That is how the math of capitalism works.
youtube AI Governance 2026-04-24T20:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugy3-902LFPpYPmjC_l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyeIEXcsFZAYD7tmyR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgynvzrQ79P0SxIQF6Z4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyW-6YdC0iUEvSjPaR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwNhdsiOOLBaCCp0at4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzloCNEcwgXXg4zRl54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyB2m0F9WlzfR7dVAV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugyt7-rgC37uCGIKPU14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw_02p3zxvnFiciTnt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzNm2Y74BRABa3aY994AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
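The raw response above is a JSON array with one record per coded comment, each carrying the four dimensions shown in the coding result. A minimal sketch of how such a response could be parsed and validated against the value sets observed in this appendix; the allowed-value lists here are inferred from the responses shown and are an assumption, as the actual codebook may define more categories:

```python
import json

# Allowed values per dimension, inferred from the coded records above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "indifference"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record's values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response in the same shape as the array above.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(coded[0]["policy"])  # -> regulate
```

Validating against a closed value set catches the common failure mode where the model invents an off-codebook label, so malformed responses fail loudly instead of silently entering the coded dataset.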