Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Funny af. AI will collapse without human input because AI needs to be trained with a constant input of new data. All AI is doing is regurgitating what it was already scraped and it doesn't produce anything new. It takes data it already has (for an AI that "creates" something), randomizes weights and outputs garbage. Then it learns from that garbage. What happens when you make a copy of a copy of a copy of a copy of a copy? Nothing good. Without a fresh input, AI just slops itself to death and starts acting like it's on hard drugs. Also, a bit on AI slop like Gemini, GPT and the likes. I used AI for coding (theory, not actual code) and asked it things I knew were true. It gave me SOME wrong answers. What is the point of AI if it can be wrong? Even if there is a disclaimer that AI will hallucinate and that checking the information before blindly trusting it, WHAT IS THE POINT OF ASKING AN AI IF I STILL HAVE TO CHECK FOR THE INFORMATION MYSELF? Didn't I just waste time asking a computer and then still having to search for myself and figure out what it told me is right or wrong? Companies peddling AI for this are the big problem. The technology is fine, how it's used in majority of cases is not - especially since they spend hundreds of millions/billions on investments into AI and this is all they can show for it. Now the investors are asking where are the profits so they need to sell shit marketed as AI. Otherwise they get hanged by the balls by the investors. This is just a rat race for profit in majority of cases, not because there is a need for it.
Source: youtube · Viral AI Reaction · 2025-03-31T14:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugz_6xt8GmxMWe6fikZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwt869D-BS8aX8ZuFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxSFqMEqPCsNhvMUiZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyUSjxpukAPo9hz0Sl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxLWBAMHzaCtrnulRx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzLPkzp18jSjdu1tax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxCNcibE9jDrMMFjBJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"}, {"id":"ytc_Ugw5QkEVkIUO3Jk2kVB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwSYBoZvtgqrUhdzyd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzVqzMXa6JJIOYF4bh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"} ]