Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- `ytr_UgwBrT6zN…`: "That's a good point. At this point in time, using AI is unethical because of how…"
- `ytc_UgzrSXfQ9…`: "The elite powers who profit from human conflict, discord, division, a lack of tr…"
- `ytc_UgytZbZSq…`: "In first-world countries AI is advancing by giant steps, but in the countries of…"
- `ytc_UgyW4oazo…`: "Can you imagine a world in which AI does everything in order to make people as a…"
- `ytc_UgxvTCpOE…`: "not always easy to bypass, but if you wanna check if chatgpt text sounds human, …"
- `ytc_Ugz-MBw7R…`: "It's ironic if those people tell artists to just use AI so they can do something…"
- `ytc_UgxOGlajP…`: "She is brilliant. He is obsessed with Sam Altman, spending huge amounts of ti…"
- `ytr_UgwMws16d…`: "Haha, that's a funny scenario! While Sophia might have her limits, it's interest…"
Comment
Writing code is just one part of what developers/engineers do.
The whole thinking process is written out for the AI via context files or prompts.
The AI just follows commands, and whatever you don't define, it will simply pick for you.
In the end, it is just a change in how we work: more high-level, more defining things up front, with the AI following what you ask for. You hope the AI understands you; otherwise you have to define more.
But AI is still a guess system, not a truth system.
That is why people have to define the truth they want in context files and prompts, and I also think people should define tests before the AI generates any code. A test is a trust system, and you don't want to generate it with a guess system.
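The commenter's "tests before AI code" idea can be sketched as follows. This is a minimal illustration, not code from the tool: the function name `slugify` and its behavior are hypothetical examples, chosen only to show a human-written test acting as the trust check for a later, possibly AI-generated implementation.

```python
import re


def test_slugify():
    # Human-defined truth, written BEFORE any implementation exists.
    # These assertions are the "trust system" the commenter describes.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"


# An implementation (whether hand-written or AI-generated) is only
# accepted once it passes the pre-written test above.
def slugify(text: str) -> str:
    """Lowercase the text, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)


test_slugify()  # raises AssertionError if the implementation drifts
```

The point is the ordering: the test encodes intent independently of the generated code, so a wrong guess by the model fails loudly instead of silently shipping.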
youtube
AI Jobs
2026-03-18T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyc9p4ZIzAzGcDqbAJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxMHp35Jhe8OCBs3Bp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzo2HE_CA84muNvW_R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxOy3MVOCllyDkM8OF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzquyoDlYoU-KwVtH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDtgMEVo8s3SfzS2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMcyyUWM0OsAKJxZJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyRswegcZ9bbrp9ty94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxPcFj2EXQ8JuZAG5t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgysQM0S3geWLR9EvY14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
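The raw response above is a JSON array of per-comment codes keyed by `id`. A minimal sketch of how such output could be parsed and indexed for the "look up by comment ID" view; the tool's actual parsing code is not shown here, so everything beyond the JSON shape (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) is an assumption.

```python
import json

# Two rows copied from the sample response above.
raw_response = """
[
  {"id": "ytc_Ugyc9p4ZIzAzGcDqbAJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxMHp35Jhe8OCBs3Bp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Index the array by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}


def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codes[comment_id]


print(lookup("ytc_Ugyc9p4ZIzAzGcDqbAJ4AaABAg")["emotion"])  # outrage
```

A real pipeline would also validate that each row carries all four dimensions and that the IDs match the batch of comments sent to the model, since LLM output can drop or mangle entries.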