Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey, I am a devops engineer and I use AI daily. The world is not that different in this respect compared to 5 years ago. Yeah, the tech is good, revolutionary even, but it is far from replacing humans. My CTO is fully immersed in the AI bubble and would replace everyone if it were feasible. It is not. Maybe you can use AI as a replacement for microprojects, but as soon as one thing goes wrong, you are lost. Even for Gemini 3 it is impossible to do meaningful work independently on small codebases. Large codebases are simply out of reach. And this version of AI is not on track to become AGI anyway; they are just throwing money at compute to build larger models on the same principles. There is cyclic funding: OpenAI is billions in debt, and the only real beneficiaries here are hardware companies (which are financing the AI companies...). It will crash. Recently I was preparing a presentation for a conference and decided to give AI a chance, since it was an isolated project. I gave it stellar documentation, picked the approach (reveal.js + d3.js for the graphs), and let it crunch. The number of manual fixes I had to make for what the state-of-the-art models screwed up was so high that I basically learned to use these frameworks and could have done it all faster myself. Keep in mind that it was just a bunch of slides.
YouTube · Viral AI Reaction · 2025-11-24T07:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugx-Ld6rwkL2lM0NTXd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwrVn75NMzk4WY0D_J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgxYM8Eq_K5NYayGwot4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgxYEqC-xD6xl3QnHEN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugz00uTlQrYVH6rhEwN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgysBKfHpSa4wA2p8FR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},{"id":"ytc_UgwQcxJUt3tZ39tnTOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzbDLbaIFuQ8WqU2n14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_Ugzq1RPPWXhmvZyW2El4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgxpDK1y8KJmIN_r5Qh4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"indifference"}]
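The raw response above is a JSON array of per-comment codes, one object per comment, keyed on the four coding dimensions plus the comment id. A minimal sketch of how such a payload could be parsed and checked in Python, assuming label sets inferred from the values visible in this response (the full codebook may define additional labels):

```python
import json

# Allowed label sets per dimension. These are inferred from the labels
# that actually appear in the response above; the real codebook may
# include more values.
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "none", "government", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation", "unclear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record.

    Raises ValueError if a record is missing an id or carries a label
    outside the allowed set for its dimension.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: invalid {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Usage with a single hypothetical record:
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"fear"}]'
records = parse_coding(raw)
print(len(records))  # 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise surface downstream as an "unclear" dimension in the coding-result table.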