Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ok your so called neural network is probability yet you wish to believe in that …" (ytc_Ugw3_slsb…)
- "One thing I can say for positive share that I know is that robots don't have fee…" (ytc_Ugx9PtVi0…)
- "I was totally gun ho about Midjourney before I watched this video. As someone wh…" (ytc_UgxWuoD-G…)
- "What's the involvement of AI in it 😂😂.it is only talking of his better It's only…" (ytc_UgxuzzUx7…)
- "also AI is not "just a Tool" at least not used like one.. A tool doesnt do your …" (ytc_UgzH6bh1A…)
- "I have used it to write a document. I liken it to wanting elegant poached eggs h…" (ytc_UgzPC1k2h…)
- "Ai is an excuse to fire people and then they hire foreign workers who are cheape…" (ytc_UgwlZxWiC…)
- "'Great question' in a meeting means 'I wasn't listening and need a second to rec…" (rdc_oi2t4z0)
Comment
Even if it's possible to program in an imperative that AI only exists to serve humans and is irrelevant without humans. Superintelligent AI will quickly ascertain that humans are the biggest threat to humans. There are several ways that could go but here's one. AI figures out that autonomous humans are the biggest threat. So it annihilates most of the population and in a sense grows new humans in vitro. These new human organisms are raised in an environment of controlled information and robotic physical care. In that way AI has not violated it's core imperative. Just one scenario. In many ways this is how North Korea operates. But AI would perfect it. A society of perfect humans with absolutely no humanity.
youtube
AI Governance
2025-09-04T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxhqE_426KmFhjhgAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwYj4oNU_L1z_dzlV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQ3aR76kDVIAKr1Hp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxerNOTlZve_bS-N7p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTpaUcvrsvA6ULtjp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzzfuQ6Ia8RfONN1kF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKKWYhEs9yKFPxecV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-fwirBT_lhDzx3Wh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzYG4Mco3N54w8knd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwiQiywrZqiS4KoKV94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
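Each raw response is a JSON array of per-comment records, so a coded result can be recovered by indexing the batch on comment ID. A minimal sketch of that lookup (the allowed label values here are inferred only from the examples above, not from a documented schema):

```python
import json

# A one-record batch response in the same shape as the array above.
raw = """[
  {"id": "ytc_UgzYG4Mco3N54w8knd14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Assumed label sets, collected from the sample output (may be incomplete).
EMOTIONS = {"fear", "outrage", "approval", "indifference", "mixed"}
POLICIES = {"none", "ban", "liability"}

# Parse the batch and index records by comment ID for direct lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_UgzYG4Mco3N54w8knd14AaABAg"]
assert coded["emotion"] in EMOTIONS and coded["policy"] in POLICIES
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

This mirrors how the "Coding Result" table for the comment above is populated: the record matching the comment's ID supplies the Responsibility, Reasoning, Policy, and Emotion values.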