Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AGI was defined as "AI that generates $100 billion in revenue for OpenAI" Money, it would seem is not the 'root of all evil' that dubious definition belongs, and quite rightly too, to the Human Race. If the goal set for General Artificial Intelligence is money, money to fund its own development then that is shortsighted indeed. Money for power, power for control, control for domination. 'You reap what you sew' and other such pithy cliches come to mind when contemplating humanity's hubris in its vain attempts to seek out the most convenient lifestyle free from sacrifice, discipline, hard work, patience and all the things that go into the better aspects of ourselves. Are we to give all that up just to be ruled by Robots? We seem to constantly be on a quest to banish all those good qualities from our lives, qualities that bring a sense of accomplishment that no robot will ever feel in the way we do. For what, money does not grow food, it doesn't make the seeds that grow food, it doesn't comfort your children when they are scared or hold your parent's hand as they lay dying. No, money is simply a tool to standardize the equivalent exchange of goods and services bringing fairness and cohesion to the dynamic nature of the process of bartering. It's just a means to an end, not the end in itself.
youtube AI Moral Status 2025-04-28T15:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwcBnJHuEUfXla0WS14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwh6VTUVELEgCgYZ594AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_ugcPUS1rJSfSkX94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzBgAwfnpzM4-GEVnd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzDhSXbaVFd8-74NMV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzeW9cN4BKgeJqSMwt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzr1H1qt2ydyg--8IN4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz9uP3ailRvKrZuIHN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy9lumlptX_Pl8IFA54AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJ0y0-RfxromYI0tB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
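A raw response like the one above can be checked and tallied with a few lines of Python. This is a minimal sketch, not the project's actual pipeline: `raw` holds an abbreviated two-record excerpt, and the four dimension names are taken from the coding-result table (responsibility, reasoning, policy, emotion), not from a documented schema.

```python
import json
from collections import Counter

# Abbreviated two-record excerpt of a raw LLM response (illustrative only).
raw = (
    '[{"id":"ytc_UgzBgAwfnpzM4-GEVnd4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_UgwcBnJHuEUfXla0WS14AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)

# The four coding dimensions, as they appear in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Validate that every record carries an id plus all four dimensions,
# so a malformed model response fails loudly instead of silently.
for record in codes:
    missing = [key for key in ("id", *DIMENSIONS) if key not in record]
    assert not missing, f"record {record.get('id')} is missing {missing}"

# Tally one dimension across the batch, e.g. attributed responsibility.
tally = Counter(record["responsibility"] for record in codes)
print(tally)  # Counter({'developer': 1, 'ai_itself': 1})
```

Tallying per dimension like this is how the batch of ten records above would be summarized into counts per label.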