Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- "The only part of what this impressive interviewee shared that I'd like to challe…" (ytc_UgxCeV_Vy…)
- "we dont need none of this AI bs they are shoving down are throats....jsut let pe…" (ytc_UgzBmwUgy…)
- "I am from the future the ai took over between two sects those who viewers humans…" (ytc_Ugyq8GN0y…)
- "@DanknDerpyGamer it's easy to mention the very few who might use the technology …" (ytr_UgxAB6-oO…)
- "It's not just Ai, it's also the Indians and Singaporeans overseas. They cost com…" (ytc_UgzXddRPf…)
- "Many people are missing the point to the Google analogy. AI hiring systems wil…" (ytc_UgzKdgOX1…)
- "ai doesn't go through each step of the process to make the art, a combination of…" (ytr_Ugzk5zq0s…)
- "ai was trained very well on a specific group of ethnic people. generated imagery…" (ytc_Ugwta7Msv…)
Comment
Something about AI personhood makes me uncomfortable—especially when the conversation drifts into ideas like AI asking for a salary.
As someone who has been in front of a computer keyboard, working in tech, and watching Star Trek for 40+ years, I think Trek actually gives us a much clearer framework than today’s AI hype does. In Star Trek, most AI isn’t treated as a “person” at all—it’s infrastructure. The ship’s computer is brilliant, conversational, indispensable, but it’s still a tool. Intelligence alone is never enough to grant personhood.
When Trek does explore AI personhood—Data, or later the EMH—it’s rare, contested, and earned. Personhood isn’t about fluency, cleverness, or emotional simulation. It’s about continuity of self, moral agency, the ability to choose against one’s programming, and bearing real consequences. That’s why Data’s status is argued in court, not assumed by default. Trek is optimistic about intelligence, but very conservative about rights.
That’s why modern AI feels much closer to the ship’s computer than to a person. Today’s systems don’t have survival stakes, moral liability, or an inner cost to failure. They don’t own their decisions. Calling that “personhood” isn’t forward-thinking—it’s a category error. Star Trek understood this decades ago, and it still holds up.
Source: youtube · 2026-02-06T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw0XRM0hx0grnGAuhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxvOe6qFA1qdeRMLTd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXP1upOxINXGvotAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBJak-QTztJlT6zJF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzuii4W7E36Rm31Wvx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzib_HuzOVpxHqJ2B54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwbEqJLmZPn2gWBjYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxfPQAK7XY2bHh1wUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw2KlpFlnDAGB9KsrZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx5nkRz8DawNSOrgRV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
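The "look up by comment ID" flow at the top of this page can be sketched against a response shaped like the one above. This is a minimal illustration, not the tool's actual implementation: the two entries are copied from the raw response, and the variable names are mine.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings
# (the two entries below are copied verbatim from the response above).
raw_response = """
[
  {"id": "ytc_Ugw0XRM0hx0grnGAuhp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwBJak-QTztJlT6zJF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

# Index the array by comment ID so lookups are O(1).
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Inspect one coded comment by its ID.
coding = codings["ytc_UgwBJak-QTztJlT6zJF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself outrage
```

Indexing once up front is preferable to scanning the array per lookup when many comments are inspected against the same batch response.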