Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgyOXbYRx… — He claimed Google search doesn’t need to be regulated because Google search did …
- ytc_UgzQSlf0c… — Damn. AI does what you tell it to? Schocking. Like, those examples about black…
- ytc_Ugy09RZqb… — Oh yeah from that movie😅 from the Kid and the war of Human VS. Robots/ AI…
- ytc_UgyQkMsvg… — It's the same as every country having nuclear weapons. That makes no sense cuz n…
- ytc_UgzsfJ-Zo… — I was also thinking also that if AI and Robots make everything nobody would have…
- ytr_UgyZklviy… — @hobosorcerer Poisoning doesn't work on loras in the same way it doesn't work on…
- ytc_UgwlhsxZN… — The bullets were probably subsonic hand loads, extra quiet, but they stopped eas…
- ytc_Ugyqs7hI1… — the human problem, is that a human thought that AI was smart or accurate or good…
Comment
This whole discussion reminds me of what professor Andy Clarke was saying about our own intelligence relating to the Extended/Predictive mind thoery. He states that our intelligence develop with our actions and our body having a major influence on how our intelligence work. In this perspective, AI branching out int other way of thinging seems rather logical.
youtube · AI Moral Status · 2025-10-31T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwKmmPGxX0kbQuFEa54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMBNzPv0YV5wk26Ll4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQBf2-ySXEmEPDvGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyU52dzUJ0cP6uLeut4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyzUj23QPCSQQm3bkJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwznKZMqydHEd20M0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy6fUSAOw28Pw25Lrx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyfqT-dDAHuv22h8fl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwn4J8GVJfdW0tbAgN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzlPWOP0shh9ZTXadZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
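The coding table above is one row of this batch: the model returns one JSON object per comment, keyed by comment ID, and a lookup by ID pulls out the four coded dimensions. A minimal sketch of that lookup, assuming the raw response parses as a JSON array like the one shown (the literal below is a hypothetical two-entry excerpt, not the full batch):

```python
import json

# Hypothetical two-entry excerpt of a raw model response in the format above.
raw_response = """
[
  {"id": "ytc_UgxMBNzPv0YV5wk26Ll4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyU52dzUJ0cP6uLeut4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the batch by comment ID so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding across the four dimensions.
coding = codings["ytc_UgxMBNzPv0YV5wk26Ll4AaABAg"]
print(coding["reasoning"])  # mixed
print(coding["emotion"])    # approval
```

In practice the lookup would also want to handle IDs missing from the batch (e.g. `codings.get(comment_id)`), since a model occasionally drops or mangles an entry.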