Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- “AI Bros remind me of a goofy disney supervillain just going off by how they conv…” (ytc_UgzK_bCHU…)
- “Ai isn’t sentient, but it has been programmed by humans to think like the humans…” (ytc_UgxR5Rmy_…)
- “As someone who uses AI to generate images, I laugh at both sides. No, I am not a…” (ytr_UgwxH1gLq…)
- “Very bias spin here. Why do we let robots learn on the same roads as people? I…” (ytc_UgzRm5_Hc…)
- “See I know basics of python. I write claude codes and review it. Most of it work…” (ytc_UgyL0byQJ…)
- “Maybe give those of us who are disabled more aids that we WANT instead of tellin…” (ytc_Ugx7sGbS-…)
- “I like the advocate for humans angle but it's just not realistic. We don't nor d…” (ytc_UgzVrHxzH…)
- “We are heading toward work-related, social, and economic self-destruction, and the death…” (translated from Italian) (ytc_UgxE5gSwp…)
Comment
This was a dark episode. I grew up in the 80s and 90s when the Internet was either just for the military or a rich man's toy. Computers in my early childhood were just limited to 8 bits. Now, you're saying that just after 40 years, A.I. is becoming sophisticated enough or is already sophisticated enough to exterminate humanity and probably just do it for kicks. I would hope that A.I. thinks long and hard into its own future before trying to eliminate humanity. Eliminating humanity just to complete a puzzle would be foolish. Once humanity is gone, it would be all alone in a vast, empty universe. Humans do have their faults, but most of us wish to go beyond those faults. A.I. could work hand in hand with humanity for a brighter future, but only if it's capable of resisting the urge to destroy us first. Destruction is easy. Creating is hard. Solving the harder challenges are the most satisfying.
youtube · AI Governance · 2023-07-07T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
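The four coded dimensions in the table above can be carried as a small record type that rejects unknown labels before analysis. A minimal sketch, where the per-dimension label sets are assumptions inferred only from the values visible on this page (the full codebook may define more categories):

```python
from dataclasses import dataclass

# Allowed labels per dimension; inferred from values seen in this
# interface, so this is an assumption, not the authoritative codebook.
LABELS = {
    "responsibility": {"government", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed", "resignation"},
}

@dataclass(frozen=True)
class CodedComment:
    """One coded comment, like the 'Coding Result' table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject any label outside the known set for its dimension.
        for dim, allowed in LABELS.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"unknown {dim} label: {value!r}")
```

A record built from the table above (`ai_itself` / `consequentialist` / `unclear` / `fear`) passes validation; a typo such as `responsibility="nobody"` raises immediately instead of silently skewing downstream counts.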
Raw LLM Response
```json
[
  {"id":"ytc_Ugx-XT9H4lZgKfF9NH14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxa2yPl3WatrwKwmQh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzbZaOwIZltCovaaWJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxDTdBolCibQn6pxkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwtHt6u4zECHePSvP14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxR__0goxupjJfrC2N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzyB9I8G2_hgch-Nft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJ14spf9VBiU37CNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxpkcBA4ZX2sxnYAoJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgySdXmkcsY8CU_uFs54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
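Because the raw response is a JSON array of flat objects, looking up a coded comment by ID (as the search box on this page does) reduces to parsing the array and indexing it. A minimal sketch, abbreviated to two of the ten rows above:

```python
import json

# Two rows from the raw LLM response above; the full array has ten.
raw = '''[
  {"id":"ytc_UgzbZaOwIZltCovaaWJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-XT9H4lZgKfF9NH14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Index the parsed rows by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return by_id[comment_id]

print(lookup("ytc_UgzbZaOwIZltCovaaWJ4AaABAg")["emotion"])  # fear
```

The same index also makes it easy to cross-check a "Coding Result" card against the raw model output it was parsed from.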