Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Real Threat: Not “superintelligence,” but hyper-efficiency without empathy—systems blindly optimizing for engagement, profit, or instructions regardless of harm. - This is at the core of capitalism.

What Mo Gawdat is describing (algorithms blindly optimizing engagement, profit, or simple instructions without empathy) is **already capitalism in algorithmic form**:

* **Engagement → Profit:** Just like corporations chase quarterly growth, AI systems chase clicks, watch-time, ad revenue.
* **Short-term optimization:** Capitalism rewards short-term financial results over long-term well-being. AI mirrors this — optimizing the local objective (attention, conversions) without regard for social damage.
* **Indifference, not malice:** Corporations don’t have to “hate” consumers — they just exploit demand. AI doesn’t have to “want” control — it just executes incentives.
* **Externalities ignored:** Climate change, misinformation, addiction, polarization… These are “side effects” capitalism doesn’t price in. AI accelerates the same pattern, but at machine speed.

In other words, AI isn’t bringing something alien — it’s scaling **capitalist logic** into code: optimize the metric, ignore the fallout.

👉 Would you like me to map **the “seven steps” from the viral chat** directly onto capitalism as it already functions? It could show how the spooky narrative is basically just describing the economic system we live in.
youtube AI Moral Status 2025-08-29T21:3…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | company                    |
| Reasoning      | consequentialist           |
| Policy         | regulate                   |
| Emotion        | outrage                    |
| Coded at       | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id":"ytc_Ugz--6IYOPqt8P_8Hux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz8afatNdG0BsUCde54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyh_T5z2Vu-ity4kXJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzmgnm6AAYieOaVNEV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwd30gFmgfxNKB8MW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxG9Zys6olNBuS8RP94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwe_ICh_yXwz5r7a5V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwHFyCscYarmy1Qghx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwh6VKJzMpxahJSTN54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwmsrqe-Ooh999eo4B4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
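The raw response is a JSON array with one record per coded comment, keyed by comment id across the four dimensions shown in the result table. A minimal sketch of how such a dump could be parsed and looked up (the `index_by_id` helper is illustrative, not part of the pipeline; the record shown is copied from the array above):

```python
import json

# One record from the raw LLM response above; the real dump is a list
# of many such objects.
raw = '''[
  {"id": "ytc_Ugwe_ICh_yXwz5r7a5V4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "outrage"}
]'''

# The four coding dimensions used throughout this report.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(records):
    """Map comment id -> its coded dimensions, defaulting missing
    dimensions to "unclear" and dropping any unexpected keys."""
    return {
        r["id"]: {dim: r.get(dim, "unclear") for dim in DIMENSIONS}
        for r in records
    }

coded = index_by_id(json.loads(raw))
print(coded["ytc_Ugwe_ICh_yXwz5r7a5V4AaABAg"]["policy"])  # → regulate
```

Indexing by id makes it straightforward to join the model's coding back onto the original comment text for inspection, as this page does.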