Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- It seems to me that very few people realize that HAL *was* following orders, jus… (ytc_UgzLoSrix…)
- @desdicadoricKeep fighting the good fight. Take pride in your work, people you … (ytr_UgyNPW6IQ…)
- I can tell that the AI bots voice sounds a lot like Elon Musk.....I know that's … (ytc_Ugzes54Nd…)
- "There's a joke in Silicon Valley that when someone leaves a normal tech job, th… (ytc_UgzoYZLad…)
- there is no such thing as an "AI artist" if you use AI for your art you are not … (ytc_UgyLjd2wz…)
- Such a wise and humble robot we have here. Can she bake cookies, though? 😕 be… (ytc_Ugz5ekdgs…)
- They are just mad that robots and AI is taking over all the jobs now. 😂… (ytc_UgwSU3h5K…)
- That is another concern I have with AI. It goes far beyond art. Right now we can… (ytr_UgyVpfbCO…)
Comment
Yudkowsky: “It’s not really about humans ‘getting it wrong’ at some critical point, because by that time the AI is operating on its own, making decisions that we can’t always explain and often can’t predict.”
Klein: “I understand what you’re saying here. But there’s one point I’m not totally clear on: if AI did advance to the point at which it wanted to kill all humans, how could we have got it THAT WRONG?”
youtube
AI Governance
2025-10-20T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgySKBOAjZloZe6pW5Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVntyOVAu4MZMrAJN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6DoxeeBBDdDc_aGF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyH1S0uCeUqpw9tolt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaIeWeiOUcfaz15C14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzbUzIYeanHw25uTcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmpET2uCBo1vVrZvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwtWZAKoEeZLcYdo6x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugx5jo7Qrce8u1UfNEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzgmtSHpBxIqNmxb0x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
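The raw response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of how a lookup by comment ID could work against output of this shape (the two rows below are copied from the array above; the indexing code is illustrative, not the tool's actual implementation):

```python
import json

# Two rows taken verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgySKBOAjZloZe6pW5Z4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyH1S0uCeUqpw9tolt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Index the codings by comment ID so any coded comment can be
# inspected directly, as in the "Look up by comment ID" box above.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_UgyH1S0uCeUqpw9tolt4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

Each object carries the four coding dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion), so a dict keyed by `id` recovers any single comment's coding from a batch response.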