Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The only time i use A.I is google fricken assistant. Why? Simple I'd rather make…" — ytc_Ugw_VnEWp…
- "If we are the way we are.. unfortunately i cannot lie. Please make the AI not l…" — ytc_Ugx80YwtH…
- "@CoolC1995 Haha, imagine a nerdy robot controller taking on a battle without eve…" — ytr_UgwkIP9IQ…
- "AI Art is a can of worms. Now that it's been unleashed it's a permanent fixture …" — ytc_Ugzrh1Lkc…
- "@Flashbax7 It is known and reported many times to "hallucinate" answers... I u…" — ytr_Ugx_1lhmO…
- "When you learn that these AI training data sets are coming from AO3 and other fa…" — ytc_UgwYD7Jfn…
- "We are at the edge of a social collapse, A.I will destroy city life as we know…" — ytc_UgyJFzhkh…
- "It's not that 'more people' look for deep fake women porn, it's that more men sp…" — ytc_Ugw5nzCQP…
Comment
The whole premise here seems to be that AI "can't" write good code because it ignores industry standards, best practices, and security. But a junior developer fresh out of college will do exactly the same thing. How do we fix that? We empower them. We introduce them to standards, security principles, and strict processes. We help them grow.
That is exactly what we need to do with AI. We need to stop treating it like a chatbot and start building "software engineering" practices around it. It requires a proper structure: lifecycles, testing phases, verifications, and audits. That is what's missing, not the potential of the tech itself.
youtube · AI Jobs · 2026-02-04T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwKeI4mdqspI_iE4YN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzu6sTkqv3OOHP_RVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyq-BJmN6pNV7VXqcN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyklArafOUGQJANpDl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxOeVqVZVsCzIADmSh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzxHpwe9l1fM2Dti2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGQ8GMZ1K5vI4lNsR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy0ct2JlyrJjP57F8V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyhMHY4CM3DtsQIR914AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy7-gcYJw7bEcA-xI94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
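A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed values per dimension are those observed in this page's samples (the actual codebook may define more); `validate_codes` is a hypothetical helper, not part of any confirmed pipeline.

```python
import json

# Allowed values per coding dimension — an ASSUMPTION inferred from the
# sample output above, not a confirmed codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> dict:
    """Parse the LLM's JSON array and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Reject any record whose value falls outside the expected set.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Usage with a one-record example (hypothetical ID):
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
coded = validate_codes(raw)
print(coded["ytc_x"]["emotion"])  # approval
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the schema, so bad codes fail loudly instead of silently entering the dataset.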