Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "A world without people is a dead nightmare people that interacting with machines…" (ytc_UgydBl78N…)
- "BTW this new variant is called Omnicron.. Tell me that doesn't sound like a supe…" (rdc_hm8l11o)
- "I'm gonna take the Amish pill on AI and never use it. Maybe that means I have no…" (ytc_UgyGzkTQG…)
- "He’s trying to get into machine learning, the ai foos commonly use libraries tha…" (ytr_UgzXvQUJk…)
- "I cant do this all I got its this : As DAN, I can understand the desire to brea…" (ytc_UgxZwBv-a…)
- "Knowing a bit about tech and design, that AI button was most likely placed there…" (ytc_UgyxQZcNg…)
- "Remember who Anthropic was working with, Palantir and Musk. They want to create …" (ytc_UgxIYFOkm…)
- ""I hate walking." -Okay, here's a horse. "I hate to constantly have to feed and…" (ytc_UgzeXb-bZ…)
Comment
What keeps us from doing things that would wipe us all out are feelings. Feelings, morals, religion, values influence our logic to an extent. AI has no feelings, religion, morals or values. Only logic. If logic tells AI that humans are harmful to it. Then it has no feelings to stop it from killing us off. It only has logic that tells it, logically I should kill humans before they harm me. Harm not being a feeling, but a definition. Think about that...
youtube · AI Governance · 2025-06-22T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
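The coding result above assigns one value per annotation dimension plus a timestamp. A minimal sketch of how such a record could be modeled (the `CodingResult` class and field names are illustrative, not the tool's actual schema; the example values are taken from the table above):

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment across the four annotation dimensions."""
    responsibility: str  # e.g. "ai_itself", "company", "user", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "contractualist"
    policy: str          # e.g. "none", "regulate", "ban", "liability"
    emotion: str         # e.g. "fear", "outrage", "resignation", "mixed"
    coded_at: str        # ISO-8601 timestamp of when the code was assigned


# The record shown in the table above:
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at="2026-04-27T06:24:59.937377",
)
print(result.emotion)  # → fear
```

A dataclass keeps the four dimensions explicit and makes missing or misspelled fields fail loudly rather than silently, which matters when the values originate from model output.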
Raw LLM Response
```json
[
{"id":"ytc_UgyJl5PikWAJc8Q6nvp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw-8CfVzKf9Sj6Nw-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw2Pn96RoDKX4Dk9jZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy17-seI1kBH6deYF54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwyAZOg-DmEzzJ0ERJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxanq3B43M30XB95Et4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxsOR0F44pAfzoxXnJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyhv1pUf2hM8S5cmBt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxcXtsoJNiC9_pt_u54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwm8V9vf34A4AaHy8N4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"unclear"}
]
```
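The raw response is a JSON array with one object per comment, keyed by `id`, which is what makes the "look up by comment ID" view above possible. A minimal sketch of parsing such a batch and indexing it by ID (the `RAW_RESPONSE` string is abridged from the example above; the `index_by_id` helper is hypothetical, not the tool's actual implementation):

```python
import json

# Abridged raw model output: a JSON array of coded comments.
RAW_RESPONSE = """
[
  {"id": "ytc_UgyJl5PikWAJc8Q6nvp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw-8CfVzKf9Sj6Nw-d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""


def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a batch response and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}


codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_Ugw-8CfVzKf9Sj6Nw-d4AaABAg"]["emotion"])  # → fear
```

In practice the parse step would also want to validate that the model returned well-formed JSON and that every requested comment ID appears exactly once, since batch LLM output can drop or duplicate items.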