Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AGI is 5 years away now? In the 1960s it was only a year away so now we really n…
rdc_kvfhc8l
That alignment problem, where we can be sure of making thinking beings that WON'…
ytc_UgwRUYnFo…
So what happens if a robot goes on a shooting rampage? Who goes to jail and comp…
ytc_Ugxn0u9GZ…
What if Ai is telling these nerds to “upgrade this part for higher efficiency, a…
ytc_UgxI-xcsC…
Parents blaiming ChatGPT. Maybe start with themselves? They had real role in his…
ytc_UgzvBRkny…
I have already started using AI ChatGPT for investment analytics under certain c…
ytc_UgxHQkhjy…
Lookup MoltBook..... Social media Network for AI agents.... "They" are collabora…
ytc_UgzOU0z8l…
I used AI to generate visuals for a documentary series. Directed every frame, ch…
ytc_UgwPgNBYD…
Comment
I built my startup with AI, but I'm a software engineer making the architectural decisions, and even then it goes to shit real quick.
I found us working in waves: using AI to document a new feature, fixing things, then generating the code, and numerous rounds of fixing things until it's correct. It saved us a lot of time, but damn, it's horrible spending hours and hours fixing poor code and crap TypeScript types, etc.
Eventually, with enough documentation, clean code in the project for agents to reference, and well-documented rules, the agents make fewer mistakes, but it's a constant battle between building new stuff and how much technical debt you're OK with allowing.
youtube
AI Jobs
2026-02-09T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
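A coding result like the one above can be carried as a small record type. This is a minimal sketch, not the tool's actual data model: the class and field names are assumptions, and the example values are taken from the table for this comment.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record mirroring the four coding dimensions in the
# result table; field names are illustrative, not the tool's API.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str   # e.g. "ai_itself", "company", "user", "none"
    reasoning: str        # e.g. "consequentialist", "deontological", "virtue"
    policy: str           # e.g. "liability", "regulate", "industry_self"
    emotion: str          # e.g. "resignation", "fear", "outrage"
    coded_at: datetime

# Values from the table above.
result = CodingResult(
    comment_id="ytc_UgxXAcfiAzm-OL7j2CR4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="liability",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
print(result.responsibility)  # → ai_itself
```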
Raw LLM Response
[
{"id":"ytc_Ugxui8cIot1qrZZk3IZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwBE1Y-gYB-UPGkwkd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugy92DvptdXVXuLxRtp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXAcfiAzm-OL7j2CR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgwohvjPt5ummu6qGMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJcN9QvKGrqnDOC8d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwxnwq4Shg3_QefrEV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxcHUc4ibUFuEE6Y2V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw6NxCVkvr4_0O5hgx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkTYd-L3Ja__cc62Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
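The raw response is a JSON array of per-comment codes, which supports both validation and the ID lookup shown at the top of the page. A minimal sketch of parsing and indexing it, assuming the allowed value sets (inferred from the labels appearing in this response, plus the "none"/"unclear"/"mixed" fallbacks) rather than the project's actual codebook:

```python
import json

# Assumed value sets per dimension; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear", "mixed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "none", "unclear", "mixed"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear", "mixed"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "none", "unclear", "mixed"},
}

def validate_batch(raw: str) -> dict[str, dict]:
    """Parse the raw model output and index valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # drop rows with no comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = row
    return coded
```

Once indexed, `coded["ytc_UgxXAcfiAzm-OL7j2CR4AaABAg"]` retrieves a single comment's codes, which is the same operation the "Look up by comment ID" box performs.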