Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are a lot of things to consider with AI. The main point is the state of the morality of those who are responsible for the functions of AI like maintenance, development, and upgrades. Currently, the morality is no moralities. This means when AI takes over your job, you have nothing as a backup. How are you going to own a house and raise a family with no resources? So, is life over? Even those responsible for AI will eventually lose their job, we are talking about by 2030. So, it is because there is no moral factor in the creation of anything, just profit. But in the end there will be no profit because AI has outsmarted the world and fixes itself. Because the human will try to take back the control and keep attacking AI. Humans will become an enemy to AI, and thus the world will be in constant war. Many Star Trek and Terminator shows were made on this topic. I think we need to switch the idea that profit is not a good motive for man. It's good to make a dollar, but not good to have a stock market that takes morality to a bad level. We need to be in a new system where we are able to perform our functions as humans to be fulfilled in our aspirations and thus flourish as a race. People are ignoring this, and it will lead to the destruction of the human race. However, no one cares about the future, and so we may be domed. Count how many people lost their job due to electric cars. No one cares about the people who had their life destroyed because Elon wanted to get rich. It seems like the right thing but creates a bad future for humans and everyone is painting the three monkeys, see no evil, do no evil, hear no evil. We have a chance now to correct our way forward, some excellent people are running for office, vote straight Republican and save the human race. Don't beleive me, watch this https://www.youtube.com/watch?v=UclrVWafRAI
youtube · AI Governance · 2025-09-04T20:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
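The four coded dimensions map naturally onto a small record type. Below is a minimal Python sketch, assuming the label sets are exactly those visible in the raw response further down; the pipeline's actual codebook is not shown on this page and may permit additional values.

```python
from dataclasses import dataclass

# Label sets inferred from the raw LLM response below; the real
# codebook may define more values than these.
RESPONSIBILITY = {"developer", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "none"}
EMOTION = {"fear", "outrage", "approval", "mixed"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any label outside the observed value sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")
```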
Raw LLM Response
[ {"id":"ytc_UgzMPCIpgUHDqE97dkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyDp7TOajGrjpCHUIF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwvwYnqG6CrrJ_bvgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyKvMmqyYQ4wxR3wCV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwiKOJLlRlfme9QINx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzl859QAG3pql7S7Vh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgyhlPKnOpLKuRrd7-d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugzq07U0nryX7qMJHoV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgymwLhzPBcyVJnYhIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgydMxKwfdE5NBupfoF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]