Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Hey everyone rember they ai is unsustainable in many more than one way, ecologic… (ytc_UgzqVHxiz…)
- AI will end this world one day....one day they will not take command from their … (ytc_UgwbWL1rC…)
- It seems the destiny of humankind is either "Idiocracy" or "Futurama". As far a… (ytc_Ugjo_2qmw…)
- ai-s are way other beings. If they ever become selfconcies they might not even n… (ytc_UghD-anJq…)
- I agree. I figured that it learns all the time from me, so if I’m polite it will… (ytc_UgwnzIIgw…)
- It’s going to be used as a weapon against people and facial recognition is a bus… (ytc_UgzsW-1nw…)
- Nice try, bot. All those you (bot) mentioned SOLVED problems and created new fi… (ytr_Ugw3wgKW-…)
- there’s also an opportunity for companies that own llm/image gen software to hir… (ytr_Ugwqj3NKy…)
Comment
ChatGPT will be good for us as long as we can control it. But we should leave it free to destroy all nuclear weapons if the crazy humans plan a nuclear war. In fact, that’s another reason this CEO is warning us- he has probably already been contacted by the warmongers and warned that they will not put up with technology that limits their ability to wage nuclear war. Actually, all we have to do is create machines with software that installs LOVE as their first concern. It is not that difficult to do, and I have the plans, but I lost them when I was flying between Venus and Mars on my disk. I’ll get back to you.
Source: youtube
Topic: AI Governance
Posted: 2023-05-17T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz9V4OHAROGAKiom0Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz60Gho3GHoN7idMB14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxDBBNy2gR27gsQsH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGAHe0tauj1OsbyR14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzKGYlGuJ6okuIeZuJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgytykIe-b8DsmwskP14AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxydU-TzdSCpt_nDah4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy8Ew39V6gun_D87ep4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugyq1KsteNBOpHQiEF94AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwWQKOrY-3zd9n36KN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
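Since the raw LLM response is a JSON array keyed by comment ID, looking up the coded dimensions for a single comment amounts to parsing the array and indexing it by `id`. Below is a minimal sketch of that lookup, assuming the response has exactly the shape shown above; `codes_by_id` and `lookup` are illustrative names, not part of any tool, and only two of the rows are reproduced here for brevity.

```python
import json

# A shortened stand-in for the raw LLM response shown above:
# a JSON array of per-comment codes, one object per comment ID.
raw_response = """
[
  {"id": "ytc_UgxDBBNy2gR27gsQsH94AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwWQKOrY-3zd9n36KN4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]
"""

# Index the parsed rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning,
    policy, emotion) for a single comment ID."""
    return codes_by_id[comment_id]

code = lookup("ytc_UgxDBBNy2gR27gsQsH94AaABAg")
print(code["policy"])   # regulate
print(code["emotion"])  # fear
```

The same index can back both lookup modes on the page: direct lookup by comment ID, and resolving a clicked random sample to its full coding result.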