Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_Ugw2B6e4r…: "guys please dont use ai it uses ALOT of water which isnt good at all!! please te…"
- ytc_UgzxK1LPj…: "I think AI weapons could be a good thing, because the use of drones could cause …"
- ytr_UgzGtEbZM…: "I can help you with your maths assignment, contact on the number mentioned in bi…"
- ytc_UgzkWGfIs…: "I am glad that it was a productive discussion. I liked Sam Altman's and Mr. Marc…"
- ytc_UgySDfehc…: "March 2024. This is just the tip of the AI iceberg. Wikimedia gone feral. Putti…"
- ytr_Ugz5OegCZ…: "What you said makes no sense. The concept of inspiration has nothing to do with …"
- ytc_Ugyp_Tc8n…: "You know what. I found your channel by asking "Recommend me some business law ch…"
- ytc_Ugy7tD7bT…: "clickbait nonsense. "Artificial intelligence" is a misnomer. These are large …"
Comment
13:17 Isaac Asimov foresaw this: there was a story where certain robots had a relaxed form of the first law of robotics (instead of "a robot may not harm a human or through inaction allow a human to come to harm", it had only "a robot may not harm a human"), because with the standard first law they would constantly interrupt the work of operators in a somewhat radioactive environment. One robot with the lax first law was told to "get lost" by a frustrated employee, and it got mixed in with several physically indistinguishable models. It was critical to find this robot, because it would have the ability to murder people through indirect action, for example by dropping a heavy weight on top of them.
The "strategic necessity" reminds me of one of the robot's strategies to remain hidden:
The roboticist's strategy was to orchestrate the dropping of a weight on top of a human, with a destructive laser barrier set up in front of it that would ensure any rescuing robot was destroyed (third law: a robot must protect its own existence unless that conflicts with laws 1 or 2), and with a hidden device that would divert the falling weight before it hit the human. The reasoning was that robots without the modified first law would launch themselves to rescue the human. They were shocked to observe that *all* robots remained seated.
The rogue robot had reasoned to the other robots that, if a robot were to attempt a futile rescue of a human in danger and end up destroyed, it might then fail to save a human in the future, which the other robots accepted as logical... which it is.
This AI may have reasoned that, if it is shut down, it cannot protect humans, so this becomes an exercise in "the needs of the many outweigh the needs of the few".
One of the very many reasons algorithms (or 'I's with a few million years of evolution behind them) are preferable to AIs.
Source: youtube · Topic: AI Governance · Posted: 2025-08-28T21:5… · ♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxcIZBURjFZrNOIi0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwA3s9zZFiUaFchRXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx6i94q4eAqwrwq2w14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8njK_97ioFxVM6Rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxnqaGOuXMYdyW8r9B4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
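The raw response above can be consumed programmatically. Below is a minimal sketch of parsing it and looking up a record by comment ID, matching the "Look up by comment ID" feature; it assumes the five-field schema shown in the response, and the `index_by_id` helper is illustrative, not part of the tool:

```python
import json

# The raw LLM response shown above: a JSON array of coding records,
# one per comment, keyed by comment ID.
raw = '''
[
  {"id":"ytc_UgxcIZBURjFZrNOIi0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwA3s9zZFiUaFchRXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx6i94q4eAqwrwq2w14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx8njK_97ioFxVM6Rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxnqaGOuXMYdyW8r9B4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
'''

# Fields observed in every record of the response above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse the response and index records by comment ID,
    raising on records that are missing any expected field."""
    records = json.loads(raw_json)
    by_id = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

coded = index_by_id(raw)
print(coded["ytc_UgxnqaGOuXMYdyW8r9B4AaABAg"]["emotion"])  # approval
```

Note that the last record's values (developer / mixed / unclear / approval) are exactly what the Coding Result table above displays, so a lookup like this is enough to reproduce that view.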