Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I didn't listen to all of this, but it seems she is mainly complaining about capitalism. To say the purpose of technology is to help people flourish is true, but modern-day capitalism has obscured this reality. Technology's purpose is to add value to the shareholders' stake (aka make money). This is another way of saying the same thing, and it has ever and always been the case in all societies in all of the natural history of humans. Modern capitalism has changed the definition of what it means to be a shareholder. Instead of small, stable, family-owned cooperatives (tribes/villages) we have large international mega corps. It is ever easier for those with more to get even more. The concept of capitalism has some serious reckoning ahead. It is already not serving us well, and it is going to get worse as more and more people are disenfranchised by improved productivity.

I absolutely agree that we don't understand what kind of intelligence the LLM inference engine has. We don't have an operational definition of intelligence that can compare Homo sapiens intelligence side by side with any kind of AI. If we asked a chatbot how it feels, it would just predict what it should say based on the context. We have the anthropomorphic delusion that we are thinking, and that if we talk to a chatbot it must be thinking too. This enormous walking colony of cells wrapped in an epidermis (me) is deluded into thinking that I am thinking just because I can carry on a language-encoded conversation with a simulated version of myself. LLM-backed systems can do that too. I just made one. I wonder if we think too highly of human intelligence. Maybe it is not that magical after all.
youtube 2026-04-11T20:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugz3bJPoPfK54NIU3Ed4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7iJPvEIY5We-5bFR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyejTrOK7-c8Kpz90F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx04wlecbGB0eanSWJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxZAeTzuHKmOAd42gl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxbPnVQ7PeZqMD3NNV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw_q6GpJlp_pBmLAtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgxWz3z6SYu7WjPNuCh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx9G7zK4re-uMlT-f94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzQ6FizkQWps3tUo9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
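Before trusting a raw response like the one above, it is worth validating each record against the coding scheme. The sketch below is a minimal, hypothetical example: the dimension names come from the results shown here, but the allowed-value sets are only inferred from the values visible in this document and may not be exhaustive.

```python
import json

# Allowed values per dimension, inferred from the coded results above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse the raw LLM JSON array and keep only well-formed records."""
    valid = []
    for rec in json.loads(raw):
        # Every comment id in the response above starts with "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Reject any record whose value falls outside the coding scheme.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugz3bJPoPfK54NIU3Ed4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
print(len(validate_codes(raw)))  # 1 valid record
```

Records with unknown values are silently dropped here; in practice one might instead log them for manual review, since LLM coders occasionally emit labels outside the codebook.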