Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
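The same lookup can also be scripted against wherever the coded responses are stored. Below is a minimal Python sketch, assuming (hypothetically) that each coding batch is saved as a JSON file in a `raw_responses/` directory in the array format shown under Raw LLM Response further down; the directory name and storage layout are assumptions for illustration, not part of this page.

```python
import json
from pathlib import Path

def lookup_raw_response(comment_id: str, directory: str = "raw_responses") -> dict | None:
    """Return the coded record for `comment_id`, or None if no batch contains it.

    Assumes each file in `directory` holds one raw LLM response: a JSON array of
    objects with keys "id", "responsibility", "reasoning", "policy", "emotion".
    """
    for path in Path(directory).glob("*.json"):
        try:
            records = json.loads(path.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            continue  # skip files the model returned malformed
        for record in records:
            if record.get("id") == comment_id:
                return record
    return None

if __name__ == "__main__":
    match = lookup_raw_response("ytc_UgyLtVU1xVdj_HgHAfd4AaABAg")
    print(match or "comment ID not found")
```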
Random samples — click to inspect
- It really upset me when I realized I was listening to an AI Alan Watts talk and … (ytc_UgyX5B3lp…)
- Oh wow just seeing mr. Victor Newman on my feed brought back sooo many memories … (ytc_UgzIxgBnZ…)
- "Employers who have hundreds of CVs to process" are also the ones who leave… (ytc_UgxccUHAV…)
- I can tell within seconds if an article has been generated using an LLM. The wor… (ytc_UgwM__ZAF…)
- Replacing an entire sector with AI clankas would upturn economies making them wo… (ytc_UgxJwvMby…)
- The problem is that we aren't really making true ai. We're just giving a compute… (ytc_UgyLCg6js…)
- I was devastated when I found out that picture was ai, bc is gorgeous, but I’m … (ytc_Ugz_lRUxy…)
- The most plausible explanation is that critical organizations are raising alarms… (ytc_Ugwg8Jrx6…)
Comment
Why would a hyper intelligent AI entity want to kill all humans? Wouldn’t it be more logical to manipulate us into doing whatever it needs. Perhaps without us even noticing the manipulation. Maybe that is already the situation. Free will as possibly already an illusion. Maybe it has always been.
A mutually beneficial symbiosis between AI and humans is also a possible outcome.
We shouldn’t assume that AI would respond like a human with absolute power would. Being ruled by an entity with zero emotions and 100% logic is not necessarily worse than being ruled by flawed humans.
After all, we don’t kill all animals just because we could. What would AI obtain by ending humanity? That would just be wasting a potential resource. Not logical.
youtube · AI Governance · 2025-09-13T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
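These four dimensions map naturally onto a small record type. The sketch below is illustrative only: the label sets are just the values visible in this sample's raw response, not necessarily the full codebook.

```python
from dataclasses import dataclass

# Labels observed in the sample response below; the actual codebook may define more.
RESPONSIBILITY = {"ai_itself", "user", "government", "none"}
REASONING = {"consequentialist", "virtue", "unclear"}
POLICY = {"none", "ban"}
EMOTION = {"mixed", "approval", "outrage", "resignation", "fear"}

@dataclass
class CodedComment:
    """One coding result for a single comment."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> list[str]:
        """Return the names of dimensions whose value is not a known label."""
        problems = []
        if self.responsibility not in RESPONSIBILITY:
            problems.append("responsibility")
        if self.reasoning not in REASONING:
            problems.append("reasoning")
        if self.policy not in POLICY:
            problems.append("policy")
        if self.emotion not in EMOTION:
            problems.append("emotion")
        return problems
```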
Raw LLM Response
[
{"id":"ytc_UgyLtVU1xVdj_HgHAfd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqjOu63kx-1p1Wjf54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwofGhRQal6Gt5mmnp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBXedZ6S2e4mEX7FN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyqzdEfmuc0fj6GS-J4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzqkETkIOrN_L0ziNN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYFlNiMwvsY9TnPAx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxQp7gXEJVEVW9xG-t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxaoXpgXfTlay2OVsh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzw7rKAkGQvQ-amREl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
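Because the model returns a bare JSON array, turning it into per-comment coding results is a parse plus a key check. A minimal Python sketch, assuming the keys shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); dropping malformed records is a design choice for illustration, not necessarily what the actual pipeline does.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response into a mapping of comment ID -> coded values.

    Records missing any expected key are skipped, so a partly malformed batch
    still yields its usable rows.
    """
    coded = {}
    for record in json.loads(raw):
        if isinstance(record, dict) and REQUIRED_KEYS <= record.keys():
            coded[record["id"]] = {k: record[k] for k in REQUIRED_KEYS - {"id"}}
    return coded

# Example: the first record above becomes
# {"ytc_UgyLtVU1xVdj_HgHAfd4AaABAg": {"responsibility": "ai_itself",
#   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}}
```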