Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgyyyuENX…` "I have stored in my OpenAI a lot more frightening narrative than just being Bias…"
- `ytc_UghoxwAT4…` "This is not a robot. It's a android. It is much worse than robot. Ppl better to …"
- `ytc_Ugx-wSnI5…` "Some people applaud actors being replaced by AI but they don't realize that a ma…"
- `ytr_UgxvA2rI1…` "@MrGrantGregorytime travelers? Hmmm that's cool if true, but robot like human bo…"
- `ytc_UgzpOu017…` ""Claim they **** pencil sharpeners. Doesn't matter if it is true, just that it i…"
- `ytc_Ugwu_p3pN…` "I'd like to remind everyone that engineers invented a program that made many typ…"
- `ytc_UgzOtPiUG…` "AI is fantastic. We are the problem. After thousands of years we still all hate …"
- `ytc_UgypHj5Ry…` "We don't have robot delivery here in canada 😅lol in quebec city technology is so…"
Comment
AI can generate routine code—and even assemble more complex systems by stitching together large volumes of such logic. While the result may function, it's often inefficient, redundant or difficult to maintain. That’s when senior engineers become essential: when users start reporting bugs or performance issues, and the code needs to be reviewed and optimized, junior engineers often struggle to understand how it works or why it was written that way. But of course, the number of engineers needed to get the system up and running in the first place will be greatly reduced.
Source: youtube · AI Jobs · 2025-06-18T00:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzVdBuwWceg3li1uk54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwbk8XRkUIGGxHbUoh4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx5SxuVVAAST9KKBJF4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxM8bAzAUXxVRseOeV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzVf0mQTv2Hr8tnj3d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz2lLQrP7ILz-9r2gN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz47Z0tz8OWytRKQw54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz8i94zgbvwR5VaPBF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_UgydRqiV7vTLDILSZTN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugzx_U_oVVL2n7aMov54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
```
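Each raw response is a JSON array of coding objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields, as shown above. A minimal sketch of how such a response can be parsed for lookup by comment ID and for tallying a dimension, assuming only that schema (the two entries below are copied from the response above; the variable names are illustrative, not part of the tool):

```python
import json
from collections import Counter

# Two entries copied from the raw LLM response above; the full
# response is a JSON array of objects with the same five keys.
raw = '''[
  {"id": "ytc_UgzVdBuwWceg3li1uk54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwbk8XRkUIGGxHbUoh4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Index by comment ID, mirroring the "look up by comment ID" workflow.
by_id = {c["id"]: c for c in codes}

# Tally one dimension across the batch, e.g. emotion.
emotions = Counter(c["emotion"] for c in codes)

print(by_id["ytc_Ugwbk8XRkUIGGxHbUoh4AaABAg"]["policy"])  # regulate
print(dict(emotions))
```

A real batch would be loaded from the stored response rather than an inline string; the parsing and indexing logic is the same.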