Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of premises Dr. CS makes bothered me so I did have a little chat with AI. This is a quick sum-up of the conversation:

A lot of these “AI will replace everything very soon” takes quietly assume that intelligence is the main bottleneck in progress — but in practice, it often isn’t. Even today, we (humans) already do massive iterative exploration: propose ideas, simulate, test, refine. The slow parts are usually elsewhere — physical testing, validation, manufacturing constraints, cost, and real-world uncertainty. Making idea generation 100× faster doesn’t remove those bottlenecks, it just shifts pressure onto them.

LLMs (and similar systems) are extremely good at pattern recombination — call it advanced interpolation if you like. That can become powerful when combined with iteration and feedback, but it’s not some magic shortcut to instant, unlimited extrapolation or creativity. Without grounding in reality (experiments, constraints, validation), you just get more plausible-looking ideas, not necessarily better ones: "Creativity requires constraints, feedback, selection pressure... Without that, you get nonsense, not genius".

So yes, AI will absolutely transform parts of work — especially in software and other fast-feedback domains. But jumping from that to “near-term superintelligent agents replacing everything” seems to ignore a lot of very real, very stubborn constraints outside of pure computation. In short: faster thinking doesn’t automatically mean faster progress everywhere.
youtube AI Governance 2026-04-09T07:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxkR8kFk4g8AQKPvWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGiNK-UTcRcRWJBbB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwyCqQP9s8_RYnDmWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKiBUQdX39BMGPiYd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzKBSAbqr-SVR1g_oF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylRDFlElEh3ZelIQx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzrnTO1DdGQ8RwNYu94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgypnXgcDN5HfavtrCN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy932JTkF5IbMs3DQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyghNaZ_KsnItaJNbt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
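A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal illustration, not part of this tool: the allowed value sets are inferred only from the labels visible on this page, so the real codebook may define additional categories.

```python
import json

# Value vocabularies inferred from the responses shown on this page.
# The actual coding scheme may allow more categories (an assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment's dimensions."""
    records = json.loads(raw)
    for rec in records:
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} = {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgxkR8kFk4g8AQKPvWZ4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
records = validate_codes(raw)
print(len(records))  # 1 record parsed and validated
```

A record with a value outside the inferred vocabulary (say, a misspelled emotion) raises a `ValueError` naming the comment id and the offending dimension, which makes malformed model output easy to spot.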