Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I honestly think that we are subconsciously trying to take ourselves out of the equation. There's much talk of the human connection, what's real and so on, but I don't think that's much more than self preservation. Meaning its value lies in the reflection on yourself, not at some higher level, which is the same for things like cooperation, love etc. I think we're entirely self absorbed, biological machines, and we've just about had it with the tyranny of other people, who are notoriously unreliable and good at disrupting whatever ego trip we're on. In other words, not only _can_ a different kind of machine replace other humans, we'll actually prefer it if it did. Put _that_ in your narcissistic pipe and smoke it.
youtube · Cross-Cultural · 2025-10-21T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz9mY_-olNwW9OhyiN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx0uUqEDdy3Y7H4cOF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwr90WVeJ_wnSJ5WMR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzMT01cIX09y8EC5094AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzMS8geNUE_8JdEI4x4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugws7SDe5ti7ExLif-t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyK9cqF3E2i6wZDi994AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxgWHyuRSakOQuxVpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgySxOQGNKe4bXI_rad4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwsR1SM872kzRsav2N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
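A minimal sketch of how a raw batch response like the one above can be parsed and looked up by comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the helper `index_by_comment_id` and the shortened `raw_response` sample are illustrative, not part of the tool.

```python
import json

# Two records copied from the raw response above, as an illustrative sample.
raw_response = """
[
  {"id": "ytc_UgzMS8geNUE_8JdEI4x4AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugz9mY_-olNwW9OhyiN4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and index the records by comment ID.

    Hypothetical helper: the model returns one JSON array per batch,
    so a dict keyed by "id" lets you pull the coding for any comment.
    """
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
# The record for the comment shown on this page:
print(coded["ytc_UgzMS8geNUE_8JdEI4x4AaABAg"]["emotion"])  # resignation
```

The ID shown in the "Coding Result" table maps to exactly one entry in the batch, which is how the table values (responsibility: none, reasoning: virtue, policy: unclear, emotion: resignation) are recovered from the raw output.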