Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- i didnt realize that "deeming" was now officially a legal term for factual, evid… (rdc_nta1xv1)
- My question is: why accept it? Can't we gather together and fight autonomous t… (ytc_UgztaoeAb…)
- The part that makes great engineers great is how they think about systems. This … (ytc_UgzobKxIU…)
- It is really important that the AI periphery (as it seems this panel is based on… (ytc_UgwaPegXI…)
- Ai becoming sentient is only an issue if we continue using it as a tool instead … (ytc_UgxDhJO4w…)
- Thankful to this interview 🙏🏻🎉❤❤ ❤❤ It's very realistic and I have already exp… (ytr_UgyrFkP0V…)
- This reminds me of the Star Trek episode where they poisoned the Borg. Guerrilla… (ytc_UgxTCfNe5…)
- As someone who focused in AI for my comp sci degree, Chatbots and Gen AI are two… (ytc_Ugyav7gyM…)
Comment
The only issue I have here is that the whole thing people bring up is that "you can't program morality" and that's what makes robots attack. That's fucking stupid, because in most cases of fiction/hypotheticals of these robot uprising situations they deem humans immoral. That is the robot making a moral judgement.
That's fucking stupid. There won't be a fucking arranged robot uprising, these tech giants are fucking with us for a laugh.
Source: youtube | Posted: 2015-07-30T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugh3PVktsKFg83gCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Uggo0fVOZWLI5HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggmztviY8m9tHgCoAEC","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgjVOjIc9xnc4HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgiLykdI4thQmXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugjk_By0sXzqZ3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghmddVZf9EUMHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UggJ1ITpg8SLrngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UggfhQKYR5czPXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UghkLZ3Ypih0MHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
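The raw response is a JSON array with one record per coded comment, keyed by comment ID. A minimal sketch of how the lookup-by-ID feature could work, using Python's standard `json` module (the two embedded records are copied from the response above; the variable names are illustrative, not the tool's actual code):

```python
import json

# Raw model output for one coding batch: a JSON array, one record per comment.
# These two records are taken verbatim from the response shown above.
raw_response = """
[
  {"id":"ytc_Ugh3PVktsKFg83gCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggJ1ITpg8SLrngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
"""

records = json.loads(raw_response)

# Index the batch by comment ID so any single coded comment can be
# inspected directly, as in the "Look up by comment ID" feature.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_Ugh3PVktsKFg83gCoAEC"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself outrage
```

A real implementation would also need to handle malformed model output (truncated JSON, missing IDs) before indexing, since the response is generated text rather than a guaranteed-valid payload.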