Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I hate to tell y'all this but everybody with deep fake tech is doing this. A t…" (ytr_UgyHBUWTM…)
- "to be more specific i got an add that told how to make ai art and stuff like tha…" (ytr_Ugw1uuczH…)
- "There is a huge fallacy in all of this. If AI replaces all the workers, the wo…" (rdc_jke8g54)
- "Why are we even having this robot fight this guy you people don't know what you'…" (ytc_UgzkTTfXi…)
- "I mean, technically you can do Art by using AI as an medium critiquing itself in…" (ytr_UgywAx8sI…)
- "Agentic AI? Sounds like sci-fi, but Pneumatic Workflow's already helping us auto…" (ytc_UgyXKX_S9…)
- "That's using sample paks. If it's not a four bar section that's been lifted fro…" (ytr_UgwBJOvqB…)
- "This is good on the net shortly as AI learns I would not wanna be this guy ...…" (ytc_Ugz_OC6j6…)
Comment
Nathan is right that 'learning to learn' is the only safety net left, but for developers, we need to be more specific.
'Adaptability' in 2025 doesn't just mean soft skills; it means Rapid Domain Switching—using AI to jump into a new tech stack instantly. And while 'coding' is at risk, 'Validation' (auditing the code AI writes) is booming.
I see too many devs stuck in the 'Orchestration' phase (just generating code) rather than the 'Validation' phase. I broke down exactly how to make this pivot in my podcast (The Spark and The Forge, Ep 78) for anyone who wants the technical roadmap, not just the general advice
Source: youtube · Topic: AI Jobs · Posted: 2025-12-30T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwXNmEG4u6wzcn5yKl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxOAkk06-VKwNeVgdV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz5sP4tggW9UU-p13R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgysDaMC2RNC1cvABIJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyp4mC7P3W9vAGuhpB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzkq2ow4g7iDn4Ia7J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz00x4-Dq74iP8mj354AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxsr-B_qQSddsNQ6Bd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwTDTdU3mmbQRiLbPR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyeMa4LDZ-06JVljNx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
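The raw response above is a JSON array with one row per comment and four categorical dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and sanity-checked before it lands in the coded dataset — note that the allowed label sets below are inferred from the samples shown on this page, not from a published codebook, and `validate_codes` is a hypothetical helper, not part of any tool shown here:

```python
import json

# Label sets per dimension, inferred from the sample responses above.
# These are assumptions, not an authoritative schema.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation",
                "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"developer","reasoning":"mixed",'
       '"policy":"industry_self","emotion":"approval"}]')
rows = validate_codes(raw)
print(rows[0]["responsibility"])  # developer
```

Rejecting rows with out-of-vocabulary labels, rather than silently keeping them, makes it easy to spot when the model drifts from the prompt's label set.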