Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He argues for existential risk. I respectfully disagree.

Flaw #1: "Scaling Laws Create Intelligence." The theory is that more data + more compute = a smarter being. This is fundamentally wrong. As I said, more data and compute → solving (in probability) longer tasks — that's not intelligence.

Flaw #2: "AI Learned Math." The claim that AI went from failing addition to winning Math Olympiads is not true. Large Language Models are still poor with numbers. They have learned to use tools, not to reason mathematically. Top mathematicians like Terence Tao find AI unusable for their work. It generates pages of proofs that require days to debunk. Often, a single tiny error invalidates the entire result. This leads to the core problem of trust.

Flaw #3: Jobs = Accountability. Here is the fatal flaw in the AI-only enterprise. Imagine replacing every process in your company with an AI agent. One person now holds the accountability for everything. You must either check the quality of every single output, or trust the system blindly. This is an impossible burden for one individual. Accountability must be shared across a structure. This is why we need more humans, not fewer.

(Charafeddine Mouzouni)
youtube AI Governance 2025-09-10T00:4…
Coding Result
Responsibility: none
Reasoning: deontological
Policy: none
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx8hRVSWKPGZgubCax4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzqfnrC0bP_pHKlbj54AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyvoQSU_MjFP8i40-p4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxpgqr8ouNmrGs5fcZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzlR24E335sOvhewBd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxt3iP4tQrZGInkS1V4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxutsU4mLOA6QkZN3B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyo3yfqOMMnorJp3bd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxKKk3saXgD8Rfgwox4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzl1pS-a5KtizO61gZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
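A minimal sketch of how a raw response like the one above could be parsed into per-comment codes. The field names come from the JSON itself; the allowed-value sets in `SCHEMA` are an assumption inferred from the labels visible in this export, not a confirmed codebook.

```python
import json

# Allowed labels per dimension (ASSUMED from the labels seen in this
# export; the real codebook may differ).
SCHEMA = {
    "responsibility": {"none", "distributed", "ai_itself", "company", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment id, validating each dimension."""
    coded = {}
    for record in json.loads(raw):
        codes = {}
        for dim, allowed in SCHEMA.items():
            value = record.get(dim, "unclear")
            # Fall back to "unclear" when the model emits an off-schema label.
            codes[dim] = value if value in allowed else "unclear"
        coded[record["id"]] = codes
    return coded

raw = ('[{"id":"ytc_Ugx8hRVSWKPGZgubCax4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugx8hRVSWKPGZgubCax4AaABAg"]["policy"])  # liability
```

Keying by comment id makes it easy to join the codes back to the original comments, and the "unclear" fallback keeps a single malformed label from breaking the whole batch.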