Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Remember kids. You can't copyright AI art. You can take it right now and not eve…" — `ytc_UgyrU2Xbm…`
- "If humans are so afraid of AI becoming self-aware, what will happen when humans …" — `ytc_Ugyo0WtOH…`
- "I'm going to be a senior in high school. I've always loved to draw, but I didn't…" — `ytc_UgwKh_ubJ…`
- "People who use ai to make art are not real artists. They are scumbags looking fo…" — `ytc_UgxgMWXKp…`
- "Off topic but somehow related??? I asked grok to prepare a Gregorian chant to th…" — `ytc_UgzmWL_Hk…`
- "Mistake number one: companies being legally required to maximize profits is exac…" — `ytc_UgyiakJGA…`
- "@draakgast that what i said cause these real art were inspire by Ai so it look …" — `ytr_UgzQTTwpH…`
- "Yeah dont add AI into these robots heads specially with having a fully armed mac…" — `ytc_Ugzcv8JcZ…`
Comment
Hank, I'm grateful to you for having this conversation. I have worked in this field for the last decade, and one thing I'd love to see in your further discussion of these topics is a little more discretion about using anthropomorphizing language. I know, personally, it's REALLY HARD, but I think you could use your voice in a very constructive way here.
Something I think can be a helpful analogy is to talk about corporations or even the economy are other "intelligent" systems that are, often, misaligned with humanity's long-term interests. I think that this parallel can help people to see this less as an intentional "evil machine" but rather a resulting effect of complexity, where optimizing for things that might sound good (employment, shareholder value, “engagement,” etc etc) actually lead to harmful impacts (climate change, war, fake news, etc). When you talk about "superintelligence" or the dangers of "advanced" AI, I would love to see you draw on these parallels and try to use language around "feeling" and "thinking" a bit less. You touch on this a bit, when talking about corporate behavior, but I think you should treat AI systems THEMSELVES like that, instead of just the builders.
AI is an important mirror to ourselves and our societies. I think we can learn a lot from these systems, and we can learn a lot about how to adapt and approach their construction from our own complex systems.
Essentially: a system doesn’t HAVE to be super intelligent for it to be extremely dangerous. Part of what makes it dangerous is ubiquity, complexity/lack of interpretability, and an inability (or our unwillingness) to stop it.
Source: youtube — "AI Moral Status" — 2025-11-02T18:1… — ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyjZyTJQdV33bw0vop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCMEtyTtZwynwkXrV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxh3riF0-4UK4etQ0d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw7UPSqMIu1xFiIUSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzrp8HbL5oyccS7tDh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJZ5WYBWtWhye6KXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7_T-EMPRxzTRgF_N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy2ds2xE56wcAnbRrZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRM1UtUh06iVVjG654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw22W8hz_3dOr8fC7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
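The raw response above is a JSON array of records, one per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked follows; the allowed values per dimension are inferred only from the samples visible here, so the real codebook may define additional categories (assumption), and the function name `parse_coding_response` is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sampled records above.
# The actual codebook may include more categories (assumption).
DIMENSIONS = {
    "responsibility": {"ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability", "unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Example with a made-up comment ID:
raw = (
    '[{"id":"ytc_example","responsibility":"user",'
    '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]'
)
print(len(parse_coding_response(raw)))  # → 1
```

Validating against a closed vocabulary like this catches the most common failure mode of LLM coders, inventing a label outside the codebook, before bad records reach the results table.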