Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- @sticks7857 The difference is that one is a human learning and another one is a … (`ytr_Ugw3eeBZm…`)
- Our brain is NOT a biological computer! There are HUGE differences between how a… (`ytc_Ugw-Viyep…`)
- Man on phone "Hello can someone help me please......im being attacked by my AI h… (`ytc_UgzVwUzX4…`)
- OK, so this guy sells NFTs. I can see why that's an issue. But just posting AI… (`ytc_UgwotgvZd…`)
- While i genuinely hate you, i actually do love your art style, and hopefully you… (`ytc_UgyPvRrrS…`)
- Wake up folks. Pretty much ALL the AI models are being trained on data and vie… (`ytc_UgwwETUOI…`)
- if you have a physical OR mental disability and have to take breaks now it's lik… (`ytc_Ugwi6bYox…`)
- I watched several of these videos that talk about AI replacing 99% of all jobs. … (`ytc_Ugy0iDzWb…`)
Comment
I don't see why an ai would choose to kill us all even if we don't get it to emphasize with us, from a purely logical viewpoint, causing an extinction and cleaning up after it to make way for factories and power plants would take a lot of time and resources, making it an inefficient outcome along with only a temporary one as machines need resources to stay functioning as well so even with us dead the supplies will dwindle. I think a superintelligent ai would consider this outcome and seek out a more renewable one such as mastering nuclear energy then it would look to space to find more material since not only would killing us only delay the inevitable until it would have to anyway, it could probably work out a way to forcefully rearrange atomic structure, effectively allowing it to turn any material into what it would need.
Source: youtube · AI Governance · 2025-08-26T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzaeytlM0uEsLfW7VJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzu02jCOt1G3Ax824p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgysbT0gJvfKrpCcL9l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz_SWubhZLrOlG3KJB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyBotIyfY6pyui3fTB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
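Since the model returns one JSON object per comment in a single batch, looking up the coding for a given comment means parsing the array and indexing it by `id`. A minimal sketch of that lookup, using the dimension names from the table above (the `index_codings` helper and the two sample entries in `raw_response` are illustrative, not part of the original tool):

```python
import json

# Illustrative batch response in the same shape as the raw output above:
# a JSON array with one coding object per comment ID.
raw_response = """
[
  {"id": "ytc_UgzaeytlM0uEsLfW7VJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzu02jCOt1G3Ax824p4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

# The four coding dimensions plus the comment ID, per the table above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    skipping any entry that is missing a required key."""
    codings = {}
    for entry in json.loads(raw):
        if REQUIRED_KEYS.issubset(entry):
            codings[entry["id"]] = {k: entry[k] for k in REQUIRED_KEYS - {"id"}}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_UgzaeytlM0uEsLfW7VJ4AaABAg"]["responsibility"])  # ai_itself
```

Keying on `id` keeps the lookup O(1) per comment and makes it easy to detect comments the model silently dropped from the batch.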