Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For me it is nonsensical technocrat thinking. AI is a machine. It has known limitations. It does not understand anything. It has zero creativity. It will not become threat on its own. Terminator movie and other similar were fiction. AI can, and it will be be used to manipulate humans (by humans with the help of AI). It is happening already. Stupid people are, and will be increasingly affected. Humans do not need paid jobs, they need food, shelter, clothing, transportation. New technology can provide it at very low cost. Money is accounting tool. The idea that without jobs we just lay down, do nothing, or die is false. Also, the idea of pleasure as the ultimate goal in incorrect. The political system is a big problem needing solution. We have huge parasitic ruling class fighting for their own survival. They see AI as a potential tool for further enslavement of humanity. People are waking up to it, and this dynamics is thing to watch. Obedience to authority must end. It will save us. Technocrats (like this fellow) are the problem. They seek power. They do not understand humanity. They think humans are robots, and man will become inferior to advanced AI robots. Some humans do behave like robots. It can be changed. We know what AI limits are. We are not fully aware of our own limitations. In general, with AI and robotics we try to make copy of a human. There are two problems. At first our understanding who we are is very primitive. Secondly, the technology has limitations; it will hit the wall. There will be new technologies that will take us deeper. The current level is preschool stage. AI is a good tool like bulldozer vs shovel. There is nothing to be afraid of. There have been machine breakers movements in every technological boomtime.
youtube AI Governance 2025-09-04T16:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxidNuvGiYMFPJUGAh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwRYGSYytgfHaV9XP54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwsi0Q7BxW0KNjGWpZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7LBtFEwwmLIGP_Vd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw3bZaOdLX7hVjInRR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0Pmi1IEdemVI3HMl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwDEUSNpR64tWP95qF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz6Q7JSlghXDRbyVvR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmeCR2uLr3x3TpCeJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxn0tSTlLBuUkuTcnZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
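A raw response like the one above can be turned back into per-comment rows for inspection. The sketch below is illustrative, not the pipeline's actual code: the helper name `codes_by_id` is hypothetical, while the four dimension keys and the two sample entries are copied verbatim from the response above.

```python
import json

# The four coding dimensions recorded per comment, as seen in the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugwsi0Q7BxW0KNjGWpZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxmeCR2uLr3x3TpCeJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

def codes_by_id(raw_json):
    """Index coded comments by id, keeping only the four coding dimensions."""
    items = json.loads(raw_json)
    return {item["id"]: {d: item[d] for d in DIMENSIONS} for item in items}

codes = codes_by_id(raw)
print(codes["ytc_UgxmeCR2uLr3x3TpCeJ4AaABAg"]["emotion"])  # indifference
```

Indexing by id makes it straightforward to match a model-assigned code back to the exact comment shown above it, which is the point of inspecting the raw output.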