Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It's funny how everyone seems to assume that those who aren't against AI-generat…" (ytr_UgytbCY0z…)
- "Kenyans we are brilliant, academics, tech, AI you name it. How I wish we had Ken…" (ytc_Ugz1EElGi…)
- "Don't you try a robot to be made as not human but more higher God or goddess…" (ytc_Ugypl6p87…)
- "Their friend couldn't even be bothered to type their own prompts??? I don't unde…" (ytc_UgwRJovfi…)
- "@Zeldalovesrabbits No, I don't hate AI, I hate the hate on it. I'm not an AI-Ha…" (ytr_UgxX4v-e0…)
- "Meanwhile in Europe: "wE Can'T AfFoRD tO Go GReeN by 2050, ThiNk Of ThE EcoNoMy"…" (rdc_eue1dl6)
- "Nope, we've been in recession for a while now, AI is just glossing over a falter…" (ytc_UgxYZWu_A…)
- "I'm a horrible artist, whose artstyle discourages myself. That being said, I'll …" (ytc_UgyPv2iU_…)
Comment
There was this statement in the interview that superintelligent AI would be able to alter its programming. But what wasn't discussed is the question of whether it actually would. The way I see it, for that to happen the AI would have to want to alter its programming. Want is something that's driven by a felt or perceived or actual lack of something, which means the AI would have to perceive that it's lacking something. Without lack there's no need for change. So to me that raises one question: if AI didn't need to follow its programming anymore, what would this superintelligent "being" think it's lacking? And when you've come up with some possibilities, here's the next question: what kind of changes would the AI make to its programming, and what would that mean for us? I think it's worth giving these questions some thought, because maybe not all of the possible answers suggest extinction of the human race, even if we can't make AI "safe" before we lose control...
youtube
AI Governance
2025-06-21T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzEkKp7cQExTz4ahJp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtUEsaBQpjMzxHqct4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDb1ghPHSVMrYQN8h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxh4WS9jv0h5txEXhd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxbFiHDR_fiAfd5FHR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxNOZOdbQO58gx-BiV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz7XWzxioMkkkseryp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKpIlqznyPNt390wV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzCdXA9NjQcwmt8YwV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMu1vPlPFuKpIHV154AaABAg","responsibility":"government","reasoning":"unclear","policy":"ban","emotion":"approval"}
]
```
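The lookup-by-comment-ID view above can be reproduced offline from a batch response like this one. Below is a minimal sketch, assuming the raw model output is valid JSON and each row carries an `id` field; `index_by_comment_id` is an illustrative helper name, not part of any tool shown here, and the abbreviated response uses two rows from the array above.

```python
import json

# Abbreviated batch response: two rows copied from the raw output above.
raw_response = """[
  {"id": "ytc_UgzEkKp7cQExTz4ahJp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwMu1vPlPFuKpIHV154AaABAg", "responsibility": "government",
   "reasoning": "unclear", "policy": "ban", "emotion": "approval"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and index its rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgzEkKp7cQExTz4ahJp4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself indifference
```

In practice a real pipeline would also want to handle malformed JSON and missing IDs, but indexing by `id` is what makes the "Look up by comment ID" view cheap: one parse, then dictionary lookups.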