Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:

- "1:04:50 THANK YOU NIEL!!!! If they’re ALREADY making hyper intelligent AI & it’s…" (ytc_UgyeyTyI8…)
- "I understand where you're coming from! Names can definitely carry a lot of weigh…" (ytr_UgxzoAHaT…)
- "Substitute politicians, bureaucrats, US congress, industry leadership, internati…" (ytc_UgxXUIabA…)
- "All of this reminds me of that one Teen Titans episode / Starfire: “You know wh…" (ytc_UgzPLHzLf…)
- "This is natural evolution. Humans have evolved to a point where they have create…" (ytc_UgwciUdxS…)
- "Just wait for Ai to start attacking real artist's weak points) Like mental healt…" (ytc_Ugz2XNGAH…)
- "Has this guy not heard about the bubble popping? Ai is gonna be a big bust…" (ytc_UgxJ4nUOe…)
- "@simonfernandes6809question is who can fucken afford it ? Sure you maybe have tr…" (ytr_UgyYQoIdb…)
Comment

> It's both the case and yet not entirely. Agentic engineering has the potential to accelerate a single developer's productivity by huge margin but there a bunch of preconditions. In my opinion, first and foremost - detailed plans/prompts; second, developer has to know exactly what is going on, what the agent is tasked with in details; third - restricting the agent (what they CAN'T write is often more important than what they can); fourth (and seems to me as the biggest bottleneck) - AI will struggle in a shitty codebase, just as a human would; fifth - agents favor monolithic architectures over microservices atm;

Source: youtube · Video: AI Jobs · Posted: 2026-02-12T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyjLjZYBXyt8bM0cqN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzuy5MbeU-7vjZHXwB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxiCF_aIDTHOPNCE3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAhODHmj2X8vRMwW14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwl2Z6ApGARkP4QgdR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgylNvvDvlXzyTFlHSt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwg30dj_9hVRodbu6V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxse32eH632KlKvv294AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyQ6t3T8VvxMiQRuZJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMvyiw_EoZaiL3GcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
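The raw response above is a JSON array of one object per coded comment, with four coding dimensions per row. A minimal sketch of how such a batch could be parsed and validated before the codes land in the results table (the field names come from the output above; the allowed value sets are inferred from the observed samples and are an assumption — the real codebook may define more categories):

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample LLM output shown above (assumption: the actual codebook may
# include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: invalid {dim}={row.get(dim)!r}")
    return rows

# Example: one well-formed row (hypothetical comment ID).
raw = '[{"id":"ytc_example","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
rows = parse_coded_batch(raw)
print(len(rows))  # 1
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a new label; a rejected batch can then be re-prompted rather than silently stored.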