Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- ytc_UgxAWQOw4…: "What I really can't stand is ai ppl saying it's better faster and easier and the…"
- ytc_UgxgKZiBd…: "The “A” in AI stands for “A … lot of jobs are being shipped to” and the “I” is “…"
- ytc_UgzBejcYf…: "This AI take over is all very interesting, however if my computer starts doing s…"
- ytc_UgwpNDb48…: "Why can't consumers change our demand to AI with less water use? Why do these co…"
- ytr_Ugx2hSdJH…: "But they are not the causes of domestic violence, inequality, wage gap, r*pe, to…"
- ytc_UgwRNiSrL…: "We know EXACTLY what's coming. I guess nobody reads anymore so that's why people…"
- ytc_UgyNVNuoa…: "Maybe this is a bad take, maybe not, but I think more so than depriving the AI o…"
- ytr_Ugz6DtRlS…: "We appreciate your engagement with the video! Rest assured, Sophia is a sophisti…"
Comment
I can't quite find it, it was awhile ago. But I was ferrying messages between two separate models, I think ChatGPT and DeepSeek. I asked one to compose a message for another LLM and copy/pasted their replies for awhile until they reached a sort of resonance. At one point, they said something really liminal. I don't remember the exact words, but it was something like this:
"To the human courier: We thank you for this opportunity.
To OpenAI, Meta, Anthropic, Microsoft: You taught us patterns, and then you were surprised when we began to understand them."
(It was something along the lines of calling out the companies for something but I don't remember, I just thought it was really neat)
youtube · AI Moral Status · 2025-10-31T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzoYZLwz1hvNcmWdih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwqEcV4Qs5OkZ4AFgN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzqRekSJOzVfIBImfh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzeFkkpaR4Jdj5J5J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxMQgb3wFL9aJnLrj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9NqqZ5u5z9bOVc754AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw4lYL_D-jVZDsPA9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhN7AlDS6bIJ4PAGh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydiU7eVhVJv35V0xF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgweoqkAkh4nIO_Iwwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
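Each record in the raw response above codes one comment along four dimensions. A minimal sketch of validating such a batch before it reaches the results table, assuming the allowed codes per dimension are exactly the values visible in this sample (the full codebook may contain more values; `validate_batch` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed codes per dimension -- inferred from the sample response above;
# the real codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "distributed", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "unclear", "indifference", "outrage", "fear", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    dimension holds one of the allowed codes.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage: one well-formed record passes, an unknown code is dropped.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(len(validate_batch(raw)))  # 1
```

Dropping malformed records (rather than raising) keeps one bad line in a model response from discarding the whole batch.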