Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- rdc_hwzqtsa: It's at least in part because facial geometry and other biometric data is an imm…
- ytc_UgzOUZdO1…: Why do you call it AI if it doesnt have a sense of identity, or a conscience?…
- ytr_UgyX2GUNq…: You don't need to do anything to hurt them. OpenAi is on the hook for 1.5 trilli…
- ytr_UgyvQRofl…: Perfect description. This video & the author (& of course, the bastards in power…
- ytc_Ugjvd0kte…: to those idiots who are thinking of programming emotions into a toaster: "DONT!"…
- ytc_UgxdCxs5t…: The thing is, AI is better than 99% of humans. And faster than all of them. I wo…
- rdc_m98fdru: 100% this. This is the singularity sub. All we care about is acceleration. OpenA…
- rdc_kqsym35: One: I don't *want* an ASI controlled by humans. That sounds like a recipe for d…
Comment
I think there was a leaked story where a few AI killed almost 29 scientists in China where they were able to dismantle 2 but the 3rd was trying to connect to the satellites and search how to reconnect and rebuild itself. That knowledge would be available. Again, bc we are programming them so what we know they know. We have hackers so why is it so unbelievable that an AI could reach out into any computer and access it for information.

Source: youtube | Title: AI Moral Status | Posted: 2023-01-14T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwUSecP5c_EzHZsT1V4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwfiB7InMtCa2CMNgV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwVEMU8VorhbU5w3mt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzGV9EdsMXNmQBaOzB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwDCPLHM6iI3YUp1JV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
```
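The raw response is a JSON array of per-comment codes across the four dimensions shown in the table above. A minimal Python sketch of parsing and validating such a response — the `ALLOWED` vocabularies here are only those values that appear in the examples on this page, not the full codebook, and `parse_codes` is a hypothetical helper:

```python
import json

# Allowed values per dimension, inferred from the examples above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}},
    skipping records with a missing id or out-of-vocabulary values."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue
        dims = {d: record.get(d) for d in ALLOWED}
        if all(v in ALLOWED[d] for d, v in dims.items()):
            coded[cid] = dims
    return coded

raw = ('[{"id":"ytc_UgwDCPLHM6iI3YUp1JV4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgwDCPLHM6iI3YUp1JV4AaABAg"]["policy"])  # → ban
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a category label outside the coding scheme.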