Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I do not agree... With the advancements in AI and their LLM's they are still dum…
ytc_Ugzmk-XC0…
This reminds me of the Y2K scare. Many jobs will indeed be replaced by AI and c…
ytc_UgzGoN3FH…
She didn't independently and randomly say she wanted to destroy humans. She was …
ytc_UgzHnIGqG…
F*ck AI. I will be creative forever. Art and creativity is about the person, the…
ytc_UgzC7DEqh…
Artificial intelligence is not "artificial." AI has learned everything about hu…
ytc_UgwObgVLA…
Even before AI, most if not all of any sizeable corporation requires very inhuma…
ytc_Ugy0N8NGo…
Hundreds years ago Aliens broad virus to the planet Earth, the virus was knowl…
ytc_UgzwTcrWC…
I kind of think that it's correct/honest about just trying to use language to cr…
ytc_UgxoaO4rT…
Comment
Sadly, everything old is new again. The deadliest mousetrap in human history will be a Death Star, or a mini-version of this invention. Recall how the Romans felt it necessary to destroy a certain enemy, the Carthaginians. Hundreds of thousands of people were either killed or sold into slavery at the end of the Third Punic War. This was not the first or last example of genocide. AI will give tyrants a weapon that will implement a Final Solution that can destroy a city in under a week. It will be fired from space, and will not even require a ground invasion. Simulations are not required to create a Death Star. It already exists in the minds of mad men, and superhuman intelligence will make it possible. God help us all.
youtube
AI Governance
2026-03-30T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz42B6aPTrbxTWTfjd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxk6HDZiWNPUJdSkBR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzgWYJPi5y_pExL1EJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwVkr4Mftu2gAYJgk54AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwW4ECN5HuLeu2pKoV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw5NSpUvosXwT2PuE94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy9vQYW7gZhBuF_Ek54AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxCbHaIiS9D_Co0aQZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwLThoTDziW_FUJSvZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_q22XsCLkW1Nalsd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
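The lookup-by-ID flow described above can be sketched in a few lines: parse the raw JSON array and index it by comment ID. This is a minimal sketch, not the tool's actual implementation; the two sample objects are taken from the batch above, and the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the coding schema shown in the table.

```python
import json

# Raw LLM response: a JSON array with one coded object per comment.
# Two rows copied from the batch above, for illustration.
raw_response = """
[
  {"id": "ytc_UgwW4ECN5HuLeu2pKoV4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy_q22XsCLkW1Nalsd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
"""

# Index the batch by comment ID so any coded comment can be looked up directly.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_UgwW4ECN5HuLeu2pKoV4AaABAg"]
print(row["emotion"])  # fear
print(row["policy"])   # liability
```

Indexing into a dict keyed on `id` makes the per-comment lookup constant-time, which matters when a batch response covers many comments.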