Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- “@HelpImStuckInCave Yeah, you are right, I gave them too much credit. Comissions …” (`ytr_Ugw1zAQ9B…`)
- “Heed this warning, AI is not going to be "helpful" it's a way to move the rest …” (`ytc_UgyzwxCJO…`)
- “I already have a DOT sticker over my front phone cam, and a slider door over my …” (`ytr_UgzCuOAv2…`)
- “if any ai is going to become legit conscious, it'd be one made by some random pe…” (`ytc_UgwrCAYji…`)
- “I agree; the quick fix I've noticed in an attempt to lessen the mess you've desc…” (`rdc_nu77wjw`)
- “What is the purpose for the police to use face recognition? Why do the need it?…” (`rdc_exfjhfv`)
- “The guy been jealous on the other robot he in love with Sophia. What if she find…” (`ytc_Ugzc_CZw9…`)
- “@whynotcode okay so you know one line and you repeat it, completely unable to a…” (`ytr_UgwlAb7ZZ…`)
Comment
(Before this message is deleted a second time, it would be good to understand why it is being deleted): It is commendable to stand for doing what fulfills us and what we love. We should not need an apocalypse, or a simulation, to defend that.
It is also commendable to put on the table all those issues concerning algorithmic safety, as well as to focus efforts on achieving narrow, specialized versions rather than broad or generalist ones.
However, it is only fair to adjust the concepts we address and the timelines we refer to.
Algorithms are not intelligent, in the only sense of intelligence we know—that which is tied to life—nor do they in any way constitute a different species. Algorithms are a human construct (there is nothing alien in them), trained on an immeasurable volume of data, but born and shaped from the human worldview of the universe. We should not, therefore, fear a cognitive gap if we understand that the only model of the world they can reproduce is our own.
That said, we must not interpret this as something entirely reassuring either—we know that we need neither another species nor a different worldview to drive ourselves toward disappearance.
On the other hand, we are told that prediction is impossible, that we cannot understand what goes on inside the “black box,” while at the same time predictions are made and great effort is spent convincing us of a given timeline. But no: the facts have a reality of their own. If the prophecies of algorithmic self-sufficiency and its projected growth were as claimed, the omega point would not be next week, the day after tomorrow, or tomorrow; it would be right now, here and today. What really pays off about placing it tomorrow morning, in the future of the future, in that almost-arriving time, is the financial carrot: the obligation to keep securing money for investors.
And again, this does not mean there is no real risk of our disappearance. What it means is that wisdom in these matters lies in the timescale, in defining the calendar. For this, we must humbly recognize a disruptive technology while steering clear of an Adamist mindset (treating the present as though nothing came before it).
youtube
AI Governance
2025-09-19T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
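The result above can be modeled as a small record, one field per coded dimension. A minimal sketch in Python, using the dimension names and values from the table; the class and field names are illustrative, not a documented schema:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment across the four dimensions shown above."""
    responsibility: str  # e.g. "distributed"
    reasoning: str       # e.g. "contractualist"
    policy: str          # e.g. "regulate"
    emotion: str         # e.g. "mixed"
    coded_at: str        # ISO-8601 timestamp of when the code was assigned

# The table above, as a record.
result = CodingResult(
    responsibility="distributed",
    reasoning="contractualist",
    policy="regulate",
    emotion="mixed",
    coded_at="2026-04-26T23:09:12.988011",
)
print(result.policy)  # → regulate
```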
Raw LLM Response
[
{"id":"ytc_UgzP12EuEPD8aoU6SaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy6kk9PcrCiVEYTeo54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRFF7XoLvoZdAAzNN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxu4VCdvsbVMxIic5t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzGI2DK-lPdch0EDal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9S1dtJUx3oQU5omV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxhWfSM89SwgcNKmY14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwxTFegdTFawgG9-3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxxkAeake_HbXFK-7d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw7uvYD15AP88CwmhR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
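The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of loading it with Python's standard `json` module and indexing by ID (as the "look up by comment ID" feature implies); the two-entry sample and variable names are illustrative:

```python
import json
from collections import Counter

# Two entries copied from the raw response above, as a standalone sample.
raw = """
[
 {"id":"ytc_UgzP12EuEPD8aoU6SaZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxhWfSM89SwgcNKmY14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
"""

codes = json.loads(raw)

# Index by comment ID so a single comment's codes can be looked up directly.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_UgxhWfSM89SwgcNKmY14AaABAg"]["policy"])  # → regulate

# Tally one dimension across the batch.
policy_counts = Counter(c["policy"] for c in codes)
print(policy_counts)
```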