Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You say AI (i.e. plain English) is just another level of abstraction, just like C is a higher level of abstraction over assembly or Python is a higher level of abstraction over C. I think you fail to see the difference. All these levels of abstraction are still *deterministic,* meaning that if you feed the same code to Python (or to C, or to assembly, or to whatever programming language) it will always compile to the same binary code. AI is different. *Your inputs don't guarantee that you will get the same result.* It's non-deterministic programming, it fights against the very reason computers were invented, and it is, in conclusion, plain bullshit. Human language is rich; it has metaphors, it has irony, and it has many other literary resources that make it unfit for writing deterministic code. That's why programming languages were created, and this is why mathematical language, algebra, symbolism, and first-order logic were created. Writing software with a non-deterministic language model that *guesses* rather than *reasons* is surrendering yourself (and your software) to laziness and incompetence. We used to say that computers don't do what you want them to do, but what you tell them to do. Now with "AI", computers not only don't do what you want them to do, they also don't do what you tell them to do.
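The distinction the commenter draws can be illustrated with a minimal Python sketch (hypothetical, not from the page): a compiler-like transformation is a pure function of its input, while sampled LLM decoding gives no such guarantee:

```python
import random

def compile_like(source: str) -> int:
    # Deterministic: the same source always maps to the same "binary".
    # (A toy stand-in for compilation, using a byte sum.)
    return sum(source.encode())

def llm_like(prompt: str) -> str:
    # Non-deterministic: sampling may pick a different token each call.
    # (A toy stand-in for temperature-based decoding.)
    return random.choice(["refactor", "rewrite", "delete"])

# The compiler-like mapping is repeatable on identical input.
assert compile_like("print('hi')") == compile_like("print('hi')")
# llm_like("fix my code") may return a different string on each call;
# no equality assertion between two calls is safe.
```

The function names and token list are illustrative only; the point is the contract, not the implementation.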
youtube 2025-04-18T12:5… ♥ 3
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyNpo-8l8DE-OkkszB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwJHpYTVWLdLtjdGKR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwOVg_ECkSFCDoMZBl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwkDk5QKyr5rpB3aZN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxGNTEPLYDYdpRNiqh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxL6B5WXJXY5-kjmtl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzix_Y9ZI3j35M3D_F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgybaeihLsKbxGTZb4d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzOFiru-6k9uvY6uyt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyWOtoKQviCDeKlEdZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]
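The raw LLM response is a JSON array of per-comment codes for a whole batch. A minimal sketch (standard-library Python only; a two-entry subset of the array is used for brevity) of how such a batch can be indexed and one comment's codes looked up:

```python
import json

# A subset of the raw response above (two entries kept for brevity).
raw = '''[
  {"id": "ytc_UgwkDk5QKyr5rpB3aZN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzix_Y9ZI3j35M3D_F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]'''

codes = json.loads(raw)
# Index the batch by comment id so a single comment's codes can be found.
by_id = {c["id"]: c for c in codes}

# Which id belongs to the comment shown on this page is not stated here;
# this lookup uses the entry whose values match the Coding Result table.
record = by_id["ytc_Ugzix_Y9ZI3j35M3D_F4AaABAg"]
print(record["reasoning"], record["emotion"])  # deontological mixed
```

Keying on `id` makes the lookup robust to the model returning the batch in a different order than the comments were sent.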