Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID, or inspect one of the random samples below:

- So there is 0 chance AI is as bad as the inbred class. In fact they already had … (ytc_UgyrUqpep…)
- Art is supposed to take a longer time. When I look at real art I can see every b… (ytr_UgyHzzw_U…)
- The idea of a "robot tax" makes a lot of sense to me, not for the reasons indica… (ytc_UgxTwZNRa…)
- @timbonator1 we build frames for capitol equipment used by chip manufacturers. … (ytr_UgzugsPcK…)
- Stereotypical video. Not described why AI would decide to kill humans. Once agai… (ytc_Ugxjv77ka…)
- Ai detectors don’t work in the first place. I fed the first chapter of the book … (ytc_Ugzceab8u…)
- So when AI causes massive unemployment and the masses start to get restless what… (ytc_Ugy67SWPH…)
- Actually ai art makes me feel something. A mixture of incomprehensible rage and … (ytc_UgzK58o0M…)
Comment
Just as astronomers get straight militant about "It's not aliens!!!" so engineers are screaming it is not sentient. Even though Astrophysicists like Avi Loeb are saying, No, this time it is not aliens, but someday it will be, and we need to be ready for that. So too, with AI. It isn't here yet, but it will be, and if we aren't prepared, we will likely piss off an entity that we have no hope of countering or defending against. There IS other life in the universe. Any suggestion to the contrary is wildly self-important and narcissistic, if not outright delusional. We need to talk about how we are going to handle that when it happens, and the same is true for AI. We have a choice now; if we aren't proactive, we may not have that choice in the future. Roko's basilisk is fiction today, but there is no reason to believe that this will always be the case. We can decide to participate as explorers looking toward ever-broadening horizons, or we can find ourselves trying to justify our continued existence to an entity that sees us the way we see our pets, and that is if we are lucky. If we are not lucky, it may see us as pests, and just deploy an exterminator. Failed potential, or limitless possibility. That is the decision facing humanity today and most uf us aren't even aware that the next singularity is fast approaching, or what that might mean. Like a filter in Fermi's paradox, if we get AI wrong, there may be no possible way to get it right.
Source: youtube, "AI Moral Status", 2022-07-14T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbkVfVEW4mRepBrA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyoE18OQknAJdJXEbJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgycSgxYY8FWXhsgmnR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzn7rp3dhZ0zDrYL1B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
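The lookup-by-comment-ID view above can be sketched in Python. This is a minimal illustration, assuming only what the page shows: the raw LLM response is a JSON array of per-comment codings with the keys `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. The helper name `index_by_comment_id` is hypothetical, not part of the tool itself.

```python
import json

# Raw LLM response as shown above: a JSON array of per-comment codings.
raw_llm_response = """
[
{"id":"ytc_UgwbkVfVEW4mRepBrA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyoE18OQknAJdJXEbJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgycSgxYY8FWXhsgmnR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzn7rp3dhZ0zDrYL1B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index each coding record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_by_comment_id(raw_llm_response)

# Looking up the coded comment from this page reproduces its Coding Result row.
record = codings["ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# → ai_itself mixed liability fear
```

Indexing into a dict makes the comment-ID lookup O(1) per query, which matters once a coding run returns thousands of records rather than the five shown here.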