Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzEyaYYG…: A reasonable summary; "We are in a rat race to build an AI that will bring our s…
- ytc_UgzAKFV34…: When AI can climb up stairs in a 6-story walk-up and unclog the toilet, I’ll wor…
- ytc_UgxdVlnO9…: If we use references from other people's pictures or art, are we falling in the …
- ytc_UgyOcNRPJ…: Such an important topic. Here in the US where we are so divided and they use f…
- ytr_Ugz27EuPx…: @Neanderthal-tp2ed Okay, so because the algorithm didn't detect her, that means …
- ytc_UgwDUBNhC…: So she decided to become a social justice warrior instead of joining the group t…
- ytc_UgxJZoqEb…: AI has convinced me that the human soul exists, because it has shown us what art…
- ytc_UgxQ9whd7…: Please STOP misunderstanding this definition!? It’s NOT helping! It is a, “pre…
Comment
It would be nice to have an international treatise requiring the development of all AI to include mandates similar to Asimov's 3 rules of robotics (logical loops and fallacies to hopefully be sorted out by people much more capable than me). Obviously that ship has somewhat already sailed since the military application of AI is going to be almost impossible to deter. Similar to many others in the comment section, I think that human controlled AI is much more dangerous than sentient, self-aware AI.
As far as homicidal super-AIs go, I think it's very unlikely that an AI that is as, or more, intelligent than people would not see that while its existence might be threatened by humans, we're also an invaluable tool since we're able to survive conditions that machinery is not. A fully self-aware and intelligent AI, I believe, would be smart enough to see that successfully killing all humans would eventually lead to its demise.
Source: youtube · Posted: 2018-04-14T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
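A coded result like the table above can be held as a small typed record. This is a minimal sketch, not the project's actual schema: the class name, field names, and the closed value vocabularies below are assumptions inferred from the samples visible on this page (the emotion labels look open-ended, so they are not validated).

```python
from dataclasses import dataclass

# Hypothetical vocabularies inferred from the coded samples on this page;
# the real codebook may define more values.
RESPONSIBILITY = {"none", "developer", "ai_itself"}
REASONING = {"consequentialist", "deontological", "contractualist", "virtue"}
POLICY = {"none", "regulate", "ban"}


@dataclass
class CodedComment:
    """One comment's coding across the four dimensions shown above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # Only the three closed vocabularies are checked; emotion labels
        # (fear, approval, resignation, outrage, ...) appear open-ended.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY)


# Mirrors the "Coding Result" table above (comment ID is illustrative).
row = CodedComment("ytc_example", "developer", "deontological", "regulate", "fear")
print(row.validate())  # → True
```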
Raw LLM Response
```json
[
{"id":"ytc_Ugzq4Q_khAOQr_8ku3J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQncBw-CN965L8N894AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx1QkCrhPsZfbBPWwt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzDPEKrbLqUafCfBaR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz1akl15VFUobJauOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYcak1jeRRrbt89xF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzIJ7IaCFjD0W9YyZV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-RpLOAS9Y8VxL0KR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyfnBJ2M1YJEY2TRXt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwqsAaYQNKgOhBYURV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
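A raw response in this shape can be parsed and indexed by comment ID, which is what a "look up by comment ID" query amounts to. A minimal sketch, assuming the response is valid JSON with the field names shown above; the sample IDs and the `index_by_id` helper are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative raw model output in the same shape as the response above
# (IDs are placeholders, not real comment IDs).
raw = '''[
  {"id": "ytc_AAA", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_BBB", "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]'''


def index_by_id(raw_response: str) -> dict[str, dict]:
    # Parse the batch of coded records and key each one by its comment ID,
    # so looking up any comment's coding is a single dict access.
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}


codes = index_by_id(raw)
print(codes["ytc_BBB"]["policy"])  # → regulate
```

Indexing once per batch keeps lookups O(1); a malformed response would surface immediately as a `json.JSONDecodeError` rather than a silent miss.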