Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First, AI needs a great deal of electrical power to run. Next, it takes a lot of money to build the technical infrastructure before it works. Third, you need experts to push AI forward. Not every nation can afford all of this. Many countries focus on cheap labour and cheap mass products to produce and sell to the masses. That is the global economic status. If you take away the possibility that billions of people make their living from their labour, then you have billions of angry, frustrated and violent people... Then there will be a worldwide revolution which wipes these tech nerds out, including those billionaires and their politician puppets. All these AI tech promises are like all sci-fi crap; in the end the practical use is for the dustbin. There should be more focus on how billions of people can share in the wealth actually created. There is no need for a small group of some thousands of billionaires who live in absolute luxury. And how many trillions were pushed into the weapons industrial complex for useless killing products? Every person in the world could live a good life, in dignity and a prosperous environment, without hunger and the danger of becoming homeless, with full medical care. Especially if blue-collar work were paid properly. I'm not talking about socialism or communism; those are evil ideologies which killed hundreds of millions of people in the past (National Socialism - German Reich / Communism - Soviet Union, China, Korea). But a fair distribution of the profits is essential for our future prosperity. AI might help as a tool, but it's not the solution. All this hype about AI is made by people who don't know much about mathematics or computer science.
youtube AI Jobs 2025-10-27T09:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyHElriiaORgQkt-f54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYZiGhcsFo6D9fwN14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7okWxVBVGfhSNC7N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwFvyn9RuIOmPpTfNh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw17e-fNu0nGhDhe-V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxuQnHPtgFx2MKRLcB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyq_G8B_xZHZMu-LiJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjhAtyyaLkmGtLgax4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxqrW364AdBCOZ9X5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzE4JdZWPABg9VvIHh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
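The raw response is a JSON array with one record per comment, keyed by comment id. A minimal sketch of turning such a batch response into a per-comment lookup of the four coding dimensions; the function name `parse_codes` and the two-record sample string are illustrative, not part of the pipeline:

```python
import json

# Two records copied from the raw response above, as an illustrative sample.
raw = '''[
  {"id":"ytc_UgyHElriiaORgQkt-f54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqrW364AdBCOZ9X5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map each comment id to its coded dimensions, defaulting missing keys to 'none'."""
    records = json.loads(raw_response)
    return {r["id"]: {d: r.get(d, "none") for d in DIMENSIONS} for r in records}

codes = parse_codes(raw)
print(codes["ytc_UgxqrW364AdBCOZ9X5l4AaABAg"]["emotion"])  # resignation
```

A lookup like this makes it easy to join the model's codes back onto the original comments by id, which is how the single-comment view above (distributed / consequentialist / none / resignation) can be rendered.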