Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "My biggest desire for self driving cars is for my own car to drive for me. The o…" (ytc_UgyCtWVNR…)
- "Ai is stolen art from all over the world and internet, all that you see/generate…" (ytc_UgyTSqvZK…)
- "Speaking as a tesla owner who drives Uber all over Maui, Hawaii. No way, will se…" (ytc_UgxCNwLgF…)
- "We haven't even created AI and this is all starting to feel like some futuristic…" (rdc_cthxmwx)
- "This world won’t be destroyed by AI, when The Lord says it’s enough he will be t…" (ytc_Ugw52g8M0…)
- "@JohnSmith-x3y8h last to market, last in safety, last in QC, last in self driving…" (ytr_UgyirUhzh…)
- "For AI it's hard to make good background movement, like people walking in the ba…" (ytc_UgzQ57YlX…)
- "I support AI but this is actually great, because AI is not art. I'm not an idiot…" (ytc_UgylIB51W…)
Comment
"She can write 5x as many letters. That means they will need 5x fewer of her."
False. Or at least, a boss who doesn't take advantage of that across the board to make his or her company more valuable is literally wasting their time.
Literally every industrial development in the history of mankind has not reduced the need for labor by accelerating productivity, it has only increased GDP/profit. None of them have reduced working hours, either.
> "What remains?" "Maybe for a while, some types of creativity. But the whole idea of superintelligence is that nothing remains; these things will get to be better than most everything."
False premise. What drives creativity (besides "bounded randomness", perhaps) is knowing what is valuable. To paraphrase the great Billie Jo Armstrong, “I write music for me. If other people find it valuable, that’s a happy accident.” No AI can know *fundamentally* what is valuable (some would argue that an AI can't truly "know" anything); only a consciousness can, because only a consciousness subjectively experiences joy and pain, disgust and delight, ugliness and beauty... and those are where notions of "value" ultimately come from. Having worked with AI for years now to write code, I can say with certainty: They know the answer to almost everything, but the value of nothing. As it turns out, a very sophisticated next-token predictor that is trained on a ton of mostly-coherent data will *seem* to be smart, because much of intelligence may be mechanistic. It will seem to *echo* values, but that is only because of the training data.
> "I just don't like to think of what could happen." "Why?" "Well, because it could be awful."
So he can't name the monster, but he is still afraid of the monster. What would you call that sort of thinking? Baseless pessimism? 😆
youtube · Cross-Cultural · 2025-10-22T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzCuevMMJRceV4TTaZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMP9mD9t8hg6yZbZd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDFNf8hFYyFYjQoi54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTbRs-TgYHGUFIVZx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxaLd7Lb7hKLPc8cuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxIITIij7Gk_ulxMsZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx60DCvqjELIpjgawp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz_pOspk3eBEiwlhct4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxpRkondPJFSaIts_F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxe6oQ3wHdSbuU96H54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
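A raw response in this shape can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a hypothetical validator, not part of the tool itself; the allowed value sets are inferred only from the examples on this page, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the batch above
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},  # only "none" appears in this batch
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Hypothetical usage with a one-record response:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # fear
```

Rejecting out-of-vocabulary values at parse time keeps a single malformed model output from silently corrupting the coded dataset.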