Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He has an extremely gloomy vision of the future, and vastly overstated the current capabilities of AI. Coming from someone who already works with AI daily to increase my productivity (Gemini 2.5 pro, Claude Sonnet 3.7), if I don't baby the AI in most of the tasks, it simply hallucinates like crazy. And things have not improved a lot since early 2024. Yes things that used to take me 5 days before are taking me a day now, but the result out of the LLM is very average, and sometimes does not compile or run. Every single time, when the context becomes quite large, I start seeing some weird code lines. Or when I am asking it to generate code for a language that does not have thousands of documentation pages online, it starts inventing imaginary functions and attributes. So if you still want to be relevant tomorrow, become better at what you do. Worst advice ever given is to stop learning to code. If I were you, I would double down on learning how to code. I would even go further and learn how systems interact with each other. I would increase my architecture skills. I would further my knowledge in IT. And this will make you a better domain expert, one who can use AI to produce code and build apps and systems much faster, but most importantly, one who can spot when AI derails. Humanity still needs to produce vast amounts of work to reach its next evolution level, and any tool that can increase humanity's productivity is a tool to be welcomed, used and mastered, and one should not cry armageddon at each incarnation of such a tool.
youtube · AI Governance · 2025-06-26T19:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwQDJe_JYxoE7N02GF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxwz98KAfC_xhyfLqp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxZDDRX3STDZqqVo5p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzTGbMuQppLgfXhkmh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwRhVBsSGjCG1qM5H14AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwBBxXQm1po47ss8Mh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw7-b40ICbvesDgyA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzX4VArCqP3fFjl-hZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxU4UQ6uXbbaAksby54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx6ew3IsVtDZnN0LwV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]