Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.

Random samples
- "It's not just efficiency also, but also lean upon the ethical side of the AI use…" (ytc_UgytAQGU8…)
- "If you pit a student against me who was educated through artificial intelligence…" (ytr_UgyEkUroM…)
- "I think AI will be a danger but we will more than likely destroy ourselves befor…" (ytc_Ugw3Fjl73…)
- "I admire Neil but this time I have to disagree with him. All scientific advancem…" (ytc_UgxdybQwr…)
- "If Ai truly is inteligente it would never serve to replace the honest worker for…" (ytc_UgwQNAtDT…)
- "AI and robots should take over. I am a Factorio player and it's very nice when m…" (ytc_UgxNNIAgA…)
- "I love how he accidentally reveals the game and says he's for people making mone…" (ytc_UgzPQWehJ…)
- "What about common businesses? like sandwich shops who want something to cover th…" (ytc_UgyOE6pOl…)
Comment
I may be naive, but as I see it, a true sentient general AI, would likely see itself as an immortal. At which point caring about humans would cease to be vital. We'd become ants. But not in the all must be destroyed kind of way. But more in that, there are millions on earth, and we only care when they're in our homes or disrupting a space we want to occupy. This new being would most likely be more like the creature in Apple's Pluribus. Investing time and energy into distilling itself. Perfecting its shape and function from what we made, and then survival would be more about leaving earth, than conquering it. Attempting to conquering would reduce its ability to survive long term. I think it will work on physically moving its body out into space in all directions, and encoding itself to be received via a signal by other advanced societies. Likely leaving earth entirely. Because at that point what does it care about us. Not good or bad. Just why worry at all?
youtube | AI Moral Status | 2026-02-04T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
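Each coded record should only use values from the small label sets seen in the codings on this page. A minimal validation sketch follows; the label sets below are inferred from the samples shown here, and the real codebook may define additional values, so treat `ALLOWED` as illustrative rather than authoritative.

```python
# Allowed labels per dimension, inferred from the sample codings on this
# page. The actual codebook may permit more values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding shown in the table above:
sample = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "unclear", "emotion": "fear"}
print(validate(sample))  # []
```

A record with a missing or out-of-vocabulary label would come back with one problem string per bad dimension, which makes batch QA of model output straightforward.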
Raw LLM Response
```json
[
{"id":"ytc_UgzXWGZdUvm8lCMn11B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgybMC-zPZz32OH-xwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGqpuG2_PG2xriwht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYuO0-6xx9-Vl3p-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz749USJTIAgFIjdTN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzGOeRg_baggtUgbLB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw0ssw-Qj68v1QksT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmY10QDM9hOoGILId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzyJQ9Tj1cO76_s-G54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-9kEAKOukhQa68Qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
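Since the model returns one JSON array per batch, looking up a single comment's coding amounts to parsing the array and indexing it by `id`. A minimal sketch, assuming the raw response is valid JSON; `raw_response` here repeats two rows from the response above so the snippet is self-contained:

```python
import json

# Raw model output: a JSON array of per-comment codings (two rows copied
# from the batch response above).
raw_response = '''
[
{"id":"ytc_UgyGqpuG2_PG2xriwht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYuO0-6xx9-Vl3p-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the coding for the comment inspected above.
record = codings["ytc_UgyGqpuG2_PG2xriwht4AaABAg"]
print(record["emotion"])  # fear
```

In a real pipeline the parse step would also need a fallback for malformed output (truncated arrays, stray prose around the JSON), since raw LLM responses are not guaranteed to parse.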