Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
For me personally I see this as a paradox, if you know what you are doing AI has little value, sure it can write a bunch of boiler plate or a bunch of test cases but it never does it the way I want. By the time I have got the prompt written to do what I want it, and accounting for the time it takes it to run every iteration I could have just written the changes myself. If you know what you are doing you are faster than the AI and if you don't know what you are doing AI is dangerous and likely not going to work out, sure you can do the thing where you are juggling a bunch of agents and getting them to do multiple tasks at the same time so you are technically doing a bunch of stuff in parallel, but that actively lowers the quality of the output you get as you are constantly context switching.
I find AI is only good when you know what you are doing, but its also only beneficial when you don't know what you are doing. I call it the AI use case paradox
youtube · AI Jobs · 2026-04-15T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgxOa1Cw5bSJA8Mk0fZ4AaABAg.AVhR0bIzy9IAVkhePgWMua","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyNu06lincyJs_xFEp4AaABAg.AV00t86QC27AV04r8gFO-y","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwX7qjzo1jevbrvHc14AaABAg.AUcRT4udkolAUe6MuUYlbD","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzS8Wdn7SpBw_ZdXPl4AaABAg.AUACWNeZRX8AUB2P00ozDQ","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzS8Wdn7SpBw_ZdXPl4AaABAg.AUACWNeZRX8AUB7uirU0u6","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwJTLw7078y9s4ALop4AaABAg.AU9yBfFlv0EAUT3WkhxmHa","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwJTLw7078y9s4ALop4AaABAg.AU9yBfFlv0EAVcQfwIUbYw","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyaqoduY_PgcST1Sx54AaABAg.ATIrfjkca_HAUBdUU-1Bwt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxCrxQyrVCkrgsnEMx4AaABAg.ATGg6s1reHNAU365slz6gh","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgwKIURSjBsqRHCjem14AaABAg.ATFgsE7q4eGATFhC7fzwdh","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
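A response like the one above can be parsed and indexed by comment ID to recover the coding for any single comment. The sketch below is a minimal, hypothetical example (the function name `parse_codes` and the malformed-entry handling are assumptions, not part of the tool); it uses two entries copied verbatim from the response above.

```python
import json

# Raw model output: a JSON array of per-comment codes (shortened here to
# two entries copied from the response above).
raw_response = """[
  {"id": "ytr_UgwJTLw7078y9s4ALop4AaABAg.AU9yBfFlv0EAVcQfwIUbYw",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwKIURSjBsqRHCjem14AaABAg.ATFgsE7q4eGATFhC7fzwdh",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions shown in the result table, plus the comment ID.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse the model output and index codes by comment ID,
    skipping entries that are missing any coding dimension."""
    records = json.loads(text)
    by_id = {}
    for rec in records:
        if not REQUIRED_KEYS.issubset(rec):
            continue  # drop malformed entries rather than fail the batch
        by_id[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return by_id

codes = parse_codes(raw_response)
print(codes["ytr_UgwJTLw7078y9s4ALop4AaABAg.AU9yBfFlv0EAVcQfwIUbYw"]["emotion"])
# prints "resignation"
```

Looking up the first ID returns the same values shown in the Coding Result table (responsibility `ai_itself`, reasoning `consequentialist`, policy `none`, emotion `resignation`).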