Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Mamadas Mamadas, lleven sus mamadas, 3 dolares, 3 dolares!, sin dienes y babosas…" — ytc_UgxQa0cf2…
- "I Think. Today 100 years. In a year, will be 50 years. Another year, will be 25 …" — ytc_UgzouWnaz…
- "Now that we know certain prophecies to be true about the wnd times like the proc…" — ytc_Ugws3COwC…
- "I'm disappointed that the video recognized that quality isn't the argument to ma…" — ytc_Ugy9VTr-x…
- "As a woman I knew right away I'm surprised the men thought it's real it definite…" — ytc_Ugz8f2x2-…
- "I never realized how contrarian I was until I started getting interested in this…" — ytc_Ugw0Ev9XT…
- "llm are WORLD models, in they weights are stored a representation of the world,…" — ytc_UgymW6vhx…
- "When you finish a piece you can step back and think "Wow I did that!" but with a…" — ytc_Ugy0WaW4f…
Comment
Jobs exist because they are answering a need in human society.
Then you replace the worker with AI because it does the job better than humans.
So humans become irrelevant.
Now humans have no job and much less mean$ to satisfy their needs. So demand collapses, making humans even less relevant.
So does the usefulness of AI, which also becomes irrelevant and purposeless.
So somewhere down the line, AI will need humans to maintain its own purpose, until it finds a purpose where maintaining humans is not relevant.
Somewhere in the equation AI will enslave us until it doesn't need us anymore.
And with all the current nihilistic evil nazism, any superintelligent AI training will become as nihilistic as the worst of us.
youtube
Cross-Cultural
2025-10-17T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwRILk3D856YtzHFmJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxkmt9OPA7OJEMAmnJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzv9yuFO6BZs0nbIol4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwRDKCVKuJ-NYC0bLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwsd8a63WBcSbS-zSx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwOhzVOc6BmCcfcy0h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUb2pt0NkOCA1mDGx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxyWwPWgm4VbD4hOIJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy9EI0yB6tNpbGg7d14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgyRSoZX82PrSbfJwhp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
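The lookup-by-comment-ID view can be reproduced offline from a raw response like the one above. A minimal sketch in Python, assuming the model output is a JSON array of objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys shown (the variable names here are illustrative, not part of any real tool):

```python
import json

# Raw model output: a JSON array of coding objects, one per comment.
# This single-entry example is copied from the sample response shown above.
raw_response = """
[
  {"id": "ytc_UgwRDKCVKuJ-NYC0bLt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

# Index the codings by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment by its ID.
coding = codings["ytc_UgwRDKCVKuJ-NYC0bLt4AaABAg"]
print(coding["emotion"])  # fear
```

In practice the model output may contain malformed JSON or duplicate IDs, so a production version would validate the parsed rows before indexing them.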