Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work on AI and physics research. While we have made many advances, the so-called "emergent" properties of AI, such as coding, are somewhat exaggerated by the media. Researchers were surprised at first because smaller models couldn't write even a simple functional snippet, but as parameter counts and training data grew, the ability to write functional code suddenly appeared. Whether these abilities are truly emergent is still somewhat controversial among researchers. The training data still contains code, so it's not as if models magically gained the ability out of nowhere with size. What really improved drastically was their pattern recognition skill; that's what lets them write flawless code even without actually executing it. I should point out that today's premier models still lack spatial and temporal awareness compared to humans. Even multimodal LLMs like Gemini Pro, which is the best at vision, struggle with simple geometry problems, because their training data lacks diversity in such simple problems. True intelligence is when the skills, experience, and knowledge you learn to tackle one specific set of problems can be mapped onto solving a completely different set of problems. This also requires recognizing patterns, but I believe AI will get there at some point.
youtube AI Governance 2026-03-17T02:1… ♥ 18
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz-QyXSsGQeMWFQ1VV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy15kxUw5OCMZEMrQB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxrejwji932AoTruw14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwJfLLPePKZVeDHAJ14AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxu4b0ilvG05zZZLmN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx5A_nVzUJcyFM7HQh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyyPvD8C9P2L3zKadZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxBVjyRX7qRk2Hq0t14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzfMM8gnINCIFNDEPt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy6ob7PvE4UnWyaWWV4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
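The raw response is a JSON array of per-comment coding records, one object per comment id, with the four coding dimensions as fields. A minimal sketch of how such a response could be parsed and a single comment's codes looked up (the excerpt below uses two records from the response above; the indexing approach itself is an assumption, not part of the tool's documented pipeline):

```python
import json

# Excerpt of the raw model output shown above: a JSON array of
# per-comment coding records with four coding dimensions each.
raw = '''[
  {"id": "ytc_Ugz-QyXSsGQeMWFQ1VV4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy6ob7PvE4UnWyaWWV4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the records by comment id so a given comment's codes
# can be retrieved in one dictionary lookup.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["ytc_Ugy6ob7PvE4UnWyaWWV4AaABAg"]
print(codes["responsibility"], codes["policy"])  # distributed regulate
```

Because model output is not guaranteed to be well-formed JSON, a real pipeline would wrap `json.loads` in a `try`/`except json.JSONDecodeError` and flag unparseable responses rather than crash.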