Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI Experts Missed the Real Threats — And They’re Not Job Loss

They keep warning that AI will steal your job. They’re wrong. At its core, AI is one giant workflow — an immense lattice of logic gates firing in perfect sequence, governed by algorithms that predict your next word from patterns in data. It’s a supercomputer with no soul, no intuition, no leap of faith.

Between the prompt and the output lies the chasm: the human interface. The prompt is the power. Garbage in → garbage out. Master the prompt → master the machine. In the future, the elite won’t code in Python. They’ll speak to AI like assembly speaks to hardware — low-level, precise, commanding every gate. Those who do? They’ll have the jobs. They’ll have the power.

But here’s what the experts completely missed: the illogic gap. AI is pure logic. Humans are not. When logic fails, humans don’t freeze — they act on emotion, spite, love, or madness. They burn the bridge. They forgive the unforgivable. They die for an idea. AI can’t predict the moment a human says: “I don’t care what the data says — I’m doing it anyway.” That illogical leap is our last firewall.

The cybernetic merge is the real endgame. When AI is wired into your brain: no more prompt, no more chasm, no more illogic. Your rage becomes a probability. Your faith becomes a parameter. Your soul becomes code. Neuralink’s second implant is live. BCI + LLM integration is in labs. By 2030, it’ll be optional. By 2035, it’ll be expected. The future isn’t job loss. It’s mind loss.

Be of good cheer. AI will take us to places we can’t imagine. But if our power to learn outruns our wisdom to live — we won’t just lose jobs. We’ll lose what makes us human. The prompt masters will rule. The illogical will resist. The merged will obey. Choose your side.
youtube AI Jobs 2025-11-14T12:3…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwB4s2QiJz1LFphtgx4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgwNgpqhAlCuIVdZPnR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyVy3YkDzJ3i3H8Pq94AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugw50xAOLooFugOwyox4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxqFyTEly6LOZ9w2aR4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzpBMNgcaQsgV_ZRYt4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwiV381vn4KGj95o4x4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxjTTdaji__P0Idc1J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzY_7Qx2YKPucM75Hh4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwPjnbI2bHbu1fKQdR4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"}
]
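Raw LLM responses like the one above are only usable downstream if every record carries a comment id and values drawn from the codebook. A minimal validation sketch in Python follows; the allowed values per dimension are inferred from the responses shown on this page, and the actual codebook may define additional categories (assumption), so treat `ALLOWED` as a placeholder to be replaced with the real schema.

```python
import json

# Allowed values per dimension, inferred from the responses above.
# Assumption: the real codebook may include categories not seen here.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"fear", "resignation", "indifference", "outrage", "approval", "mixed"},
}

def validate_records(raw_json: str) -> list:
    """Return a list of problems found in a raw LLM response; empty means clean."""
    problems = []
    try:
        records = json.loads(raw_json)
    except json.JSONDecodeError as exc:
        return ["invalid JSON: {}".format(exc)]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]
    for i, rec in enumerate(records):
        # Comment ids on this page all carry the "ytc_" prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            problems.append("record {}: missing or malformed id".format(i))
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append("record {}: {}={!r} not in codebook".format(i, dim, value))
    return problems
```

A record that parses but uses an out-of-codebook label (e.g. `"emotion": "joy"`) is reported rather than silently kept, which makes it easy to flag responses for re-prompting or manual review.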