Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
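Internally, that lookup amounts to little more than indexing the model's JSON output by comment ID. A minimal sketch, assuming each raw response is stored as a JSON array of per-comment coding objects like the one shown under "Raw LLM Response" below (the file name is an assumption):

```python
import json

def index_codings(path: str) -> dict[str, dict]:
    """Load one raw LLM response (a JSON array of coding objects)
    and index it by comment ID for O(1) lookup."""
    with open(path, encoding="utf-8") as f:
        rows = json.load(f)
    return {row["id"]: row for row in rows}

# Illustrative usage, with an ID taken from the batch shown below.
codings = index_codings("raw_llm_response.json")
print(codings.get("ytc_UgyTfkJ6AbyQL1MBnfp4AaABAg"))
```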
Random samples
- The root problem of Gemini image generation is anti-white racism and white erasu… (ytc_UgxfndGo-…)
- I disagree. The purpose of insurance is to spread out and mitigate risk; self dr… (rdc_cylti2x)
- As a fellow beginner I've seen this. But the art community is so much more that … (ytr_Ugxo05G_p…)
- Right. We are being forced to utilize Gemini and it’s absolutely worthless. I’m … (rdc_mruob5m)
- Dude, I'm so happy people are actively agreeing that AI is NOT art and is being … (ytc_Ugw2f2jA9…)
- One note after watching this video - I see a fair number of people who seem to t… (ytc_UgzkF1yVT…)
- Because that isn't how laws work for literally anything? Nothing gets legislated… (rdc_nzh0mfn)
- People will start living in the woods who don't want to participate in modern AI… (ytc_Ugyt3EOps…)
Comment
Unfortunately, we're not dealing with just one irresponsible company. Most companies in the AI space are far more irresponsible than we can afford. The real problem isn't any of the things you're talking about, though. The most irresponsible thing anyone is doing is hooking AI up so that it can control things, despite so many indications that it doesn't know what it's doing well enough. "I'm sorry, I panicked and dropped the database in production. I know you said I needed to ask first, I know we're in multiple change freezes, but that's a pattern I saw on The Daily WTF, and that pattern said I should drop the database in production and then apologize. So I'm very sorry I did exactly what I saw online that I should do in this situation." That should have stopped anyone from giving AI access to systems. But it didn't.
I'm not saying that AI could never be trusted with production access. I'm saying that AI is not yet both sophisticated enough and well enough aligned with our values to be trusted with access to even development systems.
To be clear: yes, I know it didn't attribute where it found what to do in that situation, and I don't know that it actually got it from The Daily WTF. But that's exactly the kind of site, dedicated to teaching us from our failures, that AI is probably going to learn the wrong lessons from.
I know I've read about an intern doing basically the same thing, in several different places, and I'm not sure where they all were. In the story about the intern dropping the production database, the biggest factor was that the passwords were the same on development, where the intern officially had access, and production, where the intern supposedly did not. The production database was reachable on the company network, and the administrator password was identical on both. I didn't hear that that was the case in the AI version of the story. Generally speaking, though, development has to more or less match production for it to be useful. The passwords can be different; our production and non-production passwords are different for the database I support. But most of the security holes are the same between the two, including all of the security holes we don't know about, which an AI connected to development could discover.
Every time we bring a new person onto the team, we make the bet that they won't find any security holes present in both prod and non-prod before we figure out whether they're secretly working for a malicious actor. That bet is mitigated by a background check process that looks for associations with known malicious actors, and by the fact that we're paying them a fair wage, so they're at least a little motivated not to do too badly by us. With AI, we're not paying it, and it has far more time on its hands than the humans do. The humans are usually not security researchers. The AI effectively has a database of known exploits, and it's supposed to tell us if we're vulnerable to any of them, but how do we know it actually will? It's a risky situation.
There's also the question of what was learned from the drop-the-prod-database-during-multiple-change-freezes scenario. Apparently the humans learned that their backup strategy saved them, so they're good. But what did the AI learn? Did it actually learn the lesson they tried to teach it afterwards? Or did it learn that, next time, it needs to compromise the system backups first?
youtube · AI Moral Status · 2025-12-26T00:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzYIsJl_jnPGWoZSwV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmuF7DhGZ-MlQgVoJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzb8Iii-MHFMNOF_k54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyTfkJ6AbyQL1MBnfp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwADWL1ZWtxPeWeiWN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWu3p4jk7bLMMQn014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwX6eE5gJt-kWRmvPZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZAPnXevNUsAVYrXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxwSWqFbT35Ja__0-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy90vbtttytz7hsZFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
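If you want to sanity-check a batch like the one above before storing it, a minimal validation sketch follows. The allowed value sets are inferred only from this one sample and are an assumption; the tool's real codebook may define more values:

```python
# Allowed values per dimension, taken only from what appears in this
# sample batch; the real codebook may define more (an assumption).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate(row: dict) -> list[str]:
    """Return a list of problems with one coding object; empty means OK."""
    problems = []
    if not row.get("id"):
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        if row.get(dim) not in allowed:
            problems.append(f"{dim}={row.get(dim)!r} not in {sorted(allowed)}")
    return problems

# The fourth row of the batch above passes, and its values match the
# Coding Result table (company / deontological / liability / outrage).
row = {"id": "ytc_UgyTfkJ6AbyQL1MBnfp4AaABAg", "responsibility": "company",
       "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
print(validate(row))  # -> []
```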