Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID.
Random samples:

- "Typically, gamma is viewed as part of the problem, not of the algorithm. A reinf…" (ytr_UgyJMpeIH…)
- "Im not opose to a.i art but i think that and human art have to be in 2 different…" (ytc_UgwpbioPD…)
- "For the past 100 years we have expected our engineers, especially the electronic…" (ytc_Ugzc1actt…)
- "Is Fake AI !!! Is just a chat bot talking shit !! Sophia is not a LLM…" (ytc_UgyC3RynS…)
- "An AI does nothing until you ask it a question. There is no processing going on …" (ytc_UgwtAB3a2…)
- "A lot of this wouldn't be that much of a problem if we weren't living in a socie…" (ytc_UgxsKXMf_…)
- "Why aren't all AI models hard-coded with Asimov's 3 Laws of Robotics? This shou…" (ytc_UgxeBwA_8…)
- "Only place I’ve seen AI Learning used in a “Good” place was this post I remember…" (ytc_UgxUzgpZ-…)
Comment

> My current firm (small tech startup) actively encourages using AI like Anthropic Claude. They pay for api access and we’re encouraged to use Claude to write code. I will say it’s helpful for completing mundane tasks in a few minutes that would otherwise take a few hours to complete. However it’s definitely a trap where you don’t want to, nor should, be overly reliant on it because if the AI gets something wrong (happens more often than you’d think) then you don’t want to go down some rabbit hole of code changes and have it all be wrong.

Source: youtube, 2025-08-18T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxmJQKnGhdmS4zuFbB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"concern"},
  {"id":"ytc_Ugy-JQ81Pw_pbryYNlt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugw9Aq6hbdigOku5zy94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwkMJH_CsfzhT_G31V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwxwxyiWzInapaiAEZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzbclroyzmZXTpkLT14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugxdz2szASFRxBa3UbR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwLCH1DDxeyN4KxWzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLols8LnVk3uACLwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwgto3N2so22XG6_nd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
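The raw response is a JSON array with one coding record per comment, each carrying the four dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and validated before storing it — the allowed values below are inferred from the responses shown on this page, not an official or exhaustive schema:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses above (an assumption, not a published schema).
SCHEMA = {
    "responsibility": {"developer", "user", "government", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed"},
    "policy": {"regulate", "industry_self", "liability", "unclear", "none"},
    "emotion": {"concern", "indifference", "resignation", "approval",
                "disapproval", "outrage"},
}


def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError when a record is missing a dimension or uses a
    value outside SCHEMA, so a malformed coding fails loudly instead
    of being stored silently.
    """
    coded = {}
    for record in json.loads(raw):
        comment_id = record["id"]
        for dim, allowed in SCHEMA.items():
            if record.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: bad {dim!r} value {record.get(dim)!r}"
                )
        coded[comment_id] = {dim: record[dim] for dim in SCHEMA}
    return coded
```

With the response above loaded as `raw`, `parse_raw_response(raw)["ytc_Ugy-JQ81Pw_pbryYNlt4AaABAg"]` would return the same values shown in the Coding Result table (`user` / `consequentialist` / `industry_self` / `indifference`).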