Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Statistics is a coding problem, but coding is not a statistical problem. AI essentially assumes it's a statistical problem, hence it will always be buggier than a direct conversion of logic into code.

My take on AI coding is simply this: machine language => assembler => C/C++ => AI. This progression was possible because at each step (e.g. machine to assembler) a bit of code written as a series of binary numbers gets mapped to, or represented by, a word, say Add; that is, Add = 10101, for example. Every computer language bridges the gap between pure numbers/switches and human language. So essentially all AI is, and all it will ever be at best, is a high-level language.

That said, what is interesting is that in machine => assembler => C, when somebody writes in C, the conversion into machine code is done by compilers. Those compilers were nearly bug free: you never run into a compiler bug that ends up producing a wrong executable. In AI we call bugs hallucinations, and it is full of them. If a C-to-machine compiler had that many bugs it would never be released. Sure, we need to use this stuff in order to make it better, but in truth it has a long way to go before it can be anything like a C => machine language compiler. That said, I am prone to think that a logically built compiler from, say, English to C will outperform AI and put an end to the use of statistical AI code generators.

The AI story: a bunch of hallucinating people creating a machine to reflect themselves.
youtube AI Jobs 2026-04-26T10:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxW1UCuxe4k1kKYDmR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz8g1kFlxDXcpUxeHB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxL3mc1VfKz8ViwCLt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxoFVjEFNPV7o2IuSR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzN7wBoEr5CF6ulNtN4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2hU2RHyM3UNt1nqV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzERUa7OwMOWAIlHcV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwGDRtfdp66tVnZYGJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxFiD3lmDtqTpOAA7N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxloSLOpFrPyamoqu94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
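The raw response above is a JSON array in which each record carries a comment id plus four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and tallying such a batch in Python (the ids below are shortened placeholders, not real comment ids):

```python
import json
from collections import Counter

# Hypothetical two-record sample in the same schema as the raw LLM response.
raw = '''[
  {"id": "ytc_example1", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "company", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Check that every record carries all four coded dimensions.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
for rec in records:
    missing = [d for d in DIMENSIONS if d not in rec]
    assert not missing, f"{rec['id']} is missing {missing}"

# Tally one dimension across the batch.
emotion_counts = Counter(rec["emotion"] for rec in records)
print(emotion_counts)  # Counter({'indifference': 1, 'outrage': 1})
```

The same loop extends to the other dimensions; a batch that fails the presence check can be flagged for re-coding instead of silently producing partial tallies.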