Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
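Under the hood the lookup is just an ID match over the stored model outputs. As a minimal sketch, assuming each raw response is saved as one JSON file per batch call in a `raw_responses/` directory (a hypothetical layout, not confirmed by this page):

```python
import json
from pathlib import Path

def find_raw_response(comment_id: str, response_dir: str = "raw_responses") -> str | None:
    """Return the verbatim model output whose JSON array contains
    the given comment ID, or None if no stored response matches.
    The directory layout and file naming are assumptions."""
    for path in Path(response_dir).glob("*.json"):
        raw = path.read_text(encoding="utf-8")
        if any(rec.get("id") == comment_id for rec in json.loads(raw)):
            return raw  # the exact text the model produced
    return None
```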
Random samples — click to inspect
- @simonriley88 I don’t see how this is at all useful? Ants are far from facing … (ytr_UgxVP508y…)
- There's a lot of smart people that make predictions that are completely wrong. L… (ytc_UgwLCuM5Y…)
- An automated death threat on our roads.🫣 What happens when it fails on our roads… (ytc_UgwotJKqg…)
- The culture that is developing AI is, unquestionably, insular and self-promoting… (ytc_Ugy7tUTaN…)
- This is an interesting way to say “Tinder wants your personal biometric data for… (rdc_ohzj3a5)
- I think it's worth pointing out none of these AIs "know" what they're doing, bec… (ytc_Ugztx5osC…)
- That's a thought-provoking perspective! The evolution of life and technology is … (ytr_UgxlEMN4X…)
- For some weird reason my mind traveled to the first movies of Planet of the Apes… (ytc_UgxmFw8dB…)
Comment
The alignment problem reminds me of Megaman X. Dr Light (the creator of X, the super advanced humanoid robot) put it into a series of ethical simulations for a hundred years before releasing him. Naturally, he passed away during the testing phase, but he made sure X was good-willed when he woke up from the process.
However, a almost hundred years later some scientists found X (before he woke up) and replicated his design. They created robots with similar characteristics but without the ethical simulation phase. It resulted in a worldwide disaster.
youtube · AI Moral Status · 2023-08-25T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
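The four coding dimensions can be expressed as a small validation schema. The sketch below uses only the value vocabularies visible in the outputs on this page; the project's actual codebook may define additional values:

```python
from dataclasses import dataclass

# Value sets observed in the coded outputs shown on this page;
# the full codebook may define more values (assumption).
RESPONSIBILITY = {"developer", "ai_itself", "user", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"}
POLICY = {"ban", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "curiosity", "indifference", "resignation", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside its known vocabulary.
        for name, vocab in [("responsibility", RESPONSIBILITY),
                            ("reasoning", REASONING),
                            ("policy", POLICY),
                            ("emotion", EMOTION)]:
            if getattr(self, name) not in vocab:
                raise ValueError(f"{self.id}: unexpected {name} value {getattr(self, name)!r}")
```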
Raw LLM Response
[{"id":"ytc_UgwBU-RHHlWsZ9ZQcz94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_Ugx6lBLDFfwVE_N-yLB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZ96DeuhYKsTzKqdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwjA2x2QvAjWdsMWs94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxUuH4qdif9yLOiTot4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYHq7WIq_7loDiaz54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzdxU1GZqL-Dal_-p54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwMYdo3kBj8cHvpIjl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzac1_95XOyQvLf-HV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyOew1-BqVLET9Glnh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]