Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will take over writing all the boilerplate code and convenience scripts. For example, I needed a stupid debugging function that converted a lengthy enum into the appropriate string to print out. I pasted that into ChatGPT and had my function, which would have taken me 10 minutes to write out by hand, in less than 20 seconds. Another useful case: give it code plus the error it produces, and it can usually point you in the right direction.

But AI ultimately sucks at generating anything code-related with all but mild complexity. I could never get it to work for Haskell because it just struggles so much with the type system. It tells me that it has a function Int -> Int -> Int when what it actually produced as code was (a -> b) -> Int -> Int. Type inference for functional languages requires extremely slow and complex algorithms to deduce correctly (the Hindley-Milner typechecking algorithm in Haskell is O(n²) and doesn't get any faster), and ChatGPT just cannot handle these kinds of calculations.

ChatGPT is also notoriously bad when working with any low-level language such as C, as it produces a ton of memory leaks and logical flaws (such as using sizeof() to derive the size of a void* when what it actually wants is the size of the allocated memory). It will excel in high-level languages like Python because they shield you from all of the pitfalls you could encounter. But it won't save you in any low-level task, and human intervention will be needed for years to come.

Getting the first 90% of that AI right was fairly complex but doable. But each additional percent we want to squeeze out of it now will get exponentially harder.
Source: youtube — AI Jobs — 2024-01-15T09:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugy1LM4zkmqFK7TqapR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyNg66GI5c-siLY0El4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9o703oT9jp-W-lCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyo3J3fI3Y-p6p5r7h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyoOSUhb3j0QN1uYAV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAzxpCT-E6KCTYhzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwb8gt1ZZC-0kmvfpN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5VxK0LvTfi53LqjJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHLt5lEmE3QrE-b8d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxrAEuuJuWk5MDfcNd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]