Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its comment ID, or inspect one of the random samples below.
- "What is the obsession with Robots with Guns and self driving cars? Beginning of …" (ytc_UgzoqwcZ_…)
- "Anyone who lets new technology protect them without being ready to intervene in …" (ytc_UgzXLt820…)
- "We have bad actors in the world. Now at harm, humans, if you give them AI they w…" (ytc_UgzLwBL0p…)
- "AI wont need viruses, drones to decrease or kill human populations, judt cut off…" (ytc_UgxlgPuRU…)
- "@TheSirManGuy im not exactly an artist. But ai is taking ivermectin things and i…" (ytr_UgwwKh9kX…)
- "I wonder if men would be so obnoxious regarding deepfakes, if women, for example…" (ytc_Ugz5g8FJM…)
- "Firstly, I’d rather not interact with AI as much as possible. Unfortunately, my …" (ytc_Ugwzcqqw8…)
- "Can we all agree though that humans are already destroying ourselves at a rapid …" (ytc_UgzqANr5B…)
Comment

> No matter how you look at it, I think we're already royally f**ked. When a sentient AI thinks logically, it will kill us, because humans are a huge threat. Now, if you train AI to have empathy, then it could spare us. But we all know how human emotions work, so when it feels empathy, it also feels anger, so when it's angry, it could wipe us out either way, so.. yeah.. GG.

Source: youtube
Topic: AI Governance
Posted: 2025-06-25T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
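A coding result like the one in the table above could be represented as a small typed record. This is a minimal sketch, assuming the dimension names and values shown in the table; the `CodingResult` class name is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One comment's coded dimensions, mirroring the table above."""
    responsibility: str  # e.g. "ai_itself", "company", "developer", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "none", "regulate", "liability", "unclear"
    emotion: str         # e.g. "resignation", "fear", "outrage"
    coded_at: str        # ISO-8601 timestamp of when the coding was produced


# The record shown in the table above
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at="2026-04-27T06:24:59.937377",
)
print(result.emotion)  # resignation
```

Keeping the dimensions as plain strings matches the raw model output; a stricter pipeline might swap them for `Enum` types so that unexpected values fail loudly.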
Raw LLM Response
```json
[
{"id":"ytc_UgzYM8NwyoCis42zvLl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxQbsfODWU5XpzZkgV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznGm6NQDj9xm53-Gl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzlpBzZuWiIY6AbCLR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwgSJMILOCvFfFZpF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyN0hefIFYjv2lO7TJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbEYlhdXIQIXsE2kx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxH4fAj6jUO6DETUNp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwXFVLaymu09bSVgld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx_c3fE7uLha_PxKB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
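A raw response in this shape can be parsed, validated, and indexed by comment ID with the standard library alone. This is a minimal sketch assuming the five fields shown above; the `parse_codings` helper name is hypothetical, and the sample is truncated to two of the ten records for brevity:

```python
import json
from collections import Counter

# Two records from the raw LLM response shown above
raw = '''[
  {"id": "ytc_UgzYM8NwyoCis42zvLl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwgSJMILOCvFfFZpF14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(text):
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(text)
    return [r for r in records if set(r) == EXPECTED_KEYS]


codings = parse_codings(raw)

# Index by comment ID (supports the "look up by comment ID" view)
by_id = {r["id"]: r for r in codings}
print(by_id["ytc_UgwgSJMILOCvFfFZpF14AaABAg"]["emotion"])  # resignation

# Tally one dimension across the batch
emotions = Counter(r["emotion"] for r in codings)
print(dict(emotions))
```

Dropping malformed records rather than raising keeps a long coding run alive when the model occasionally emits a record with missing or extra keys; a stricter pipeline might log and re-queue those IDs instead.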