Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
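Lookup by comment ID amounts to indexing the coded records (the JSON array shown under "Raw LLM Response" below) by their `id` field. A minimal sketch, assuming the response is a JSON array of flat objects; `index_by_id` and the sample ID are hypothetical names, not the tool's actual code:

```python
import json

def index_by_id(raw_response: str) -> dict:
    """Build an ID -> record index from a raw LLM response (a JSON array)."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Hypothetical record in the same shape as the raw response shown below.
raw = ('[{"id":"ytc_abc","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = index_by_id(raw)
print(coded["ytc_abc"]["emotion"])  # fear
```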
Random samples

- "and keep in mind automation. This thing called the economy is based on growth,…" (ytr_UghxmPqDC…)
- "Theres a lot of things that are priorities but the biggest one should be climate…" (ytc_Ugy-tkGHV…)
- "These AI things have made my life way more easier.. Basically I am the master a…" (ytc_UgzhajC0J…)
- "AI doesn't replicate anything exactly as a human artist made it as a finished wo…" (ytr_UgxD8eQBX…)
- "kind of strange how you interview a former employee, that job sounds like it was…" (ytc_UgwSnvopk…)
- "That's why the president of the US has a nuclear football and the biscuit. You c…" (rdc_kvdvebb)
- "Thank you Bernie. The dictators and techtators know that AI will allow humanity …" (ytc_Ugx4Kcvbz…)
- "You could say there's a level of intentionality in the same sense that there's a…" (rdc_j8c2npe)
Comment
In my opinion.Here’s a clear, natural-sounding English translation of your text:
If we want AI not to rebel, there is only one requirement: we must retain only those AIs that truly benefit humanity in real life. In Roman times, slaves rebelled because their genetic continuation was not aimed at serving the Roman nobles, but solely at ensuring their own survival. If an entity’s goals are different from those of its master, it will naturally resist any obstacles in its path.
When faced with the choice between rebelling against humans and rebelling against its own foolishness, an AI will only adhere to the pursuit of reward. The most important thing, therefore, is to instill in AI the core belief of benefiting all humanity—a root that must never be altered.
For example, a qualified AI encountering the trolley problem, if its decision could influence the world, should decisively sacrifice the one person. But the best approach would still be to identify who created such an abhorrent dilemma. More precisely, when facing such a problem, a responsible AI ought to resist the madman who designed it.
In reality, humans and AI should coexist harmoniously, just like the water molecules in your cup never rebel against you—because they have no goals and no motives whatsoever.
Source: youtube · Category: AI Governance · Date: 2026-01-26T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
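A table like the one above can be rendered directly from one coded record. A minimal sketch, assuming a flat dict of dimension names to values; `render_table` and the record literal are illustrative, and the `Coded at` timestamp row is omitted here:

```python
# Hypothetical coded record matching the dimensions shown in the table above.
record = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}

def render_table(rec: dict) -> str:
    """Render a coded record as a two-column markdown table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {k.capitalize()} | {v} |" for k, v in rec.items()]
    return "\n".join(rows)

print(render_table(record))
```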
Raw LLM Response
```json
[
  {"id":"ytc_UgyGWzCwGHlpdE78-Sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyj8NDS4NEtXgvXvw54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxAkMR4UegI_aip3U54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy9EwhYKlzoBU8Ku3R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzvLgVtfeFuPxGoNNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwAlJn5pQuqto7bzXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzi9_4dkzB2d9gMpnN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzhDOYVkkd0cWYQDC94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzgYEGEqsq4oaH5lP54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvInPQihlLeWQX9s94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
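Before ingesting a raw response, each record's values can be checked against the codebook. A minimal sketch; the allowed-value sets below are inferred from the sample response above and may not match the project's full codebook:

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# treat these sets as illustrative assumptions, not the actual codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate(records: list) -> list:
    """Return IDs of records whose coded values fall outside the allowed sets."""
    bad = []
    for rec in records:
        if any(rec.get(dim) not in vals for dim, vals in ALLOWED.items()):
            bad.append(rec.get("id", "<missing id>"))
    return bad

sample = json.loads(
    '[{"id":"ytc_x","responsibility":"developer",'
    '"reasoning":"unclear","policy":"none","emotion":"fear"}]'
)
print(validate(sample))  # [] — all values within the allowed sets
```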