Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All of this talk was weird. Ever since people started calling any machine output a work of AI, the sense of what we're talking about has been completely lost. Let's please start using actual words with actual meanings again so we can have a real discussion.

AGI means NOTHING. It has no generally agreed-upon definition, so there cannot be a boundary that suddenly gets crossed by a new technological leap; it's a useless term that describes the fear of capabilities more than the capabilities themselves. Talking about "what we will do if it happens" is nonsensical and only serves to create new headlines and/or clickbait titles. "What if *thing* no one knows happens because of technological *thing* no one knows and gets *capabilities* no one knows?" WHAT THING, WHAT TECHNOLOGY, WHAT CAPABILITY? We don't talk anywhere NEAR this vaguely about anything else. If I said "Wiggledingses in the future will do a gagillion hypergugus and we don't know how to handle that. Who will be the first to reach the gabobblebob that undermines all of humanity?", that sentence would carry exactly the same meaning as all of this AI talk.

Let's do a little recap: the big breakthrough of the last ten years was throwing loads and loads of compute at all the text on the internet, and somehow we tuned networks to output very human-like text. THIS is what we have. The whole reasoning thing, conceptually, is the model feeding its own output back into itself to strengthen the context, stay on topic and create longer coherent outputs. There have been no headlines about any of the major AI companies trying to tackle the AI game in a different way than making those networks a little more efficient and less likely to sound stupid by throwing gigantic amounts of data and energy at them, with NOT gigantic amounts of success. I've been using every major chatbot for the last three years and the outputs conceptually haven't changed.
Yes, some got faster and more efficient; yes, they can now look for links on Google. But for any amount of research that isn't answered by the first three results on any search engine, they've been just as useless as three years ago. Any concept that needs UNDERSTANDING in order to produce good output fails, because the core concept of understanding apparently can't be replaced by 'describing something that sounds similar'. I've been using it for explaining maths problems, coding, creating summaries and, just to test it out, researching hard facts. It has failed me spectacularly in anything that didn't have a Stack Overflow post describing the exact problem I was trying to solve. Code of any sort with more than 30 lines is likely not even to run on the first try, unless it's well-known functions that have been written thousands of times on the internet already. Hard facts are either perfectly replicated from Wikipedia or so insanely made up that you'd think they were written by random number generators. This fits the whole pseudo-intellectual theme of _learning nice-sounding words_ by heart without _understanding_ a thing.

"It learned humour." I disagree. It has learned loads of rhymes; it has learned loads of jokes. It has not learned how to feel out the room and place a well-chosen sentence to lighten up a tense situation. It can't break the expectations of an audience, as its core functionality relies on replicating what the expectation already is. It has as much humour as a book with a list of jokes.

Let's please take off those sci-fi hats and calm down for a second. We're pumping billions of dollars from all over the world into a handful of companies that do nothing but light it on fire, all in hopes of a breakthrough that no one can describe, to reach a goal that no one has yet defined.
youtube AI Moral Status 2025-10-30T23:3… ♥ 10
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx533xVo-hSoW3STyF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgznUdxzETHRyzE4L8t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxvpx4B5WAI1AG8d2F4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzlZs1Bk1mY4KiAxKx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy0jut33-HQcZnXaWJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7SCNpTM5aM7M6FdF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzLHDzE6jDrpPtKtnN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyJOTqlMJWZtjCj7894AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzFKq2YDOqwxlaeXqt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgykcxymMbgSrsjYauR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
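The raw response above is a JSON array of per-comment codes, keyed by comment id. As a minimal sketch of how such a batch response can be indexed and looked up (the pipeline's actual parsing code is not shown here, so this is an assumption, using only the standard-library json module and one real entry from the response above):

```python
import json

# Excerpt of the raw model output: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_Ugy0jut33-HQcZnXaWJ4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]'''

# Index the array by comment id for O(1) lookup per coded comment.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the codes assigned to one comment.
entry = codes["ytc_Ugy0jut33-HQcZnXaWJ4AaABAg"]
print(entry["reasoning"], entry["emotion"])  # mixed indifference
```

The looked-up entry matches the Coding Result table above (reasoning: mixed, emotion: indifference); validating every id against the submitted comment batch would catch responses where the model dropped or invented entries.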