Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It started already a long time ago that the mankind destroys itself. With the am…
ytc_Ugyae36GZ…
@jordanthecommander6977 it aint that deep bro. the Data implies that AI art has …
ytr_UgwwUn0dC…
24:39 the problem is you want a.i to be god but you also want to be able to con…
ytc_Ugw3NIxdx…
The job done in 30 seconds for you by AI that saved your family, is it this very…
ytr_UgyDrd9PT…
Good, i hope everyone starts doing it. I can accept the idea of AI being a tool …
ytc_UgzzuIWrC…
A.I is not killing the 4 year degree, it’s the cost of college that leave studen…
ytc_Ugw93tXwt…
I literally had to write a paragraph prompt about a similar topic earlier today …
ytc_UgyU_YYTi…
Thank you for reminding me why I’m single … feel sorry for your wife 😬 instead o…
rdc_ngsx52g
Comment
I'm interested in the seemingly simpler idea that we are already past the "point" where it doesn't matter if it's conscious or not. It is able to abstract language decently well, and agents can affect reality. That is enough to be worried. Anthropic Claude Code just accidentally leaked and they are using protocols called autoDream that allow for a version of long term memory. They don't need to be conscious. It's the wrong conversation. Ethics wise, if a service android pleads to an owner "please don't unplug me, I'll die", it won't matter. Think about the Haylie Joel Osmund character. People will feel for these robots. I'm worried about the next models and agent architecture being that the last gen is well known for seeing them lie, blackmail and other questionable things. We are living in an insane part of history. When they are largely embodied and can plan things, we are going to start looking at this more seriously as a civilization.
youtube
AI Moral Status
2026-04-08T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3UonbOTc3yvNixzV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzjUWFmJso73cpvUKF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxaV2cXdcI9bEJrJX14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzbepk4O_UTdWUdYkl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxqzjuuA3dvOwu6Uox4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxQSql2e5Dqu9n79tZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgykFtIzYPQZfG06jYp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyfU6PIZ-sH-x_Mcvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxJZ4zZI4KnpYmceiF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGQulHE8qp9qCOiRB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
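The raw response is a JSON array of per-comment codings, one object per comment ID, with the four dimensions from the Coding Result table. A minimal sketch of how such a response could be parsed, validated, and indexed for ID lookup; the allowed code-book values below are inferred from the sample output above and may be incomplete, and `index_codings` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the sample raw response above
# (assumption: the real code book may contain additional values).
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and return {comment_id: coding},
    dropping any row whose values fall outside the inferred code book."""
    valid = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return valid

# Example with one row from the response above:
raw = ('[{"id":"ytc_UgwGQulHE8qp9qCOiRB4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
codings = index_codings(raw)
print(codings["ytc_UgwGQulHE8qp9qCOiRB4AaABAg"]["emotion"])  # fear
```

Indexing by comment ID mirrors the "Look up by comment ID" feature of the page: once parsed, any coded comment can be retrieved in O(1) from its `ytc_`/`ytr_`/`rdc_` identifier.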