Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Two Girls Taking Selfie- I guessed AI. The faces are illuminated differently des… (ytc_UgxJH2IrN…)
- Does it matter if AI takes over the world if China has already done it first usi… (ytc_UgxCkxXD_…)
- OpenAI is doomed to fail, Xai and GoogleAI would still be relevant due to their … (ytr_Ugw82rbop…)
- Thanks for the context. So the hiring job was outsourced and the guy who posted … (ytr_UgyuVHAoB…)
- Did you see that one robot that was eyeing up the camera that was definitely hum… (ytc_Ugybxt2mm…)
- Big wigs at companies hire McKinsey to recommend changes like laying off people.… (rdc_n7tehg3)
- Ai robotics will lead to the end of sweat shop jobs in Indonesia and elsewhere. … (ytc_UgxzzIjgP…)
- Unfortunately only enough, not most, need to choose AI. And so far they have! Th… (ytr_Ugxhp-Dhw…)
Comment
It was interesting to see some of the viewpoints and concerns about AI in this interview, but there are so many assumptions made that they make all the valid points useless to discuss. The biggest one is the assumption that human beings are able to control everything, or that we are always in control. Sure, we can control certain things to a degree, but there are many things that we absolutely cannot control. For example, if some comet traveling billions of years from interstellar space that would wipe out all living things on earth were coming tomorrow, there's nothing we could do to avoid our extinction. In that case, we were destined to go extinct even before the first humans came to be on earth. The rise of super intelligent AI is inevitable on our current timeline and human trajectory. Maybe the timing of the rise of AI could have been delayed or accelerated, but from where we already are, it is inevitable. Sure, we can ask all the AI developers to stop before super intelligent AI is created, but nobody will stop. It's like nuclear weapons development during WWII. Nazi Germany, Great Britain, the US, and the Soviet Union were all racing to develop such a weapon first. They all knew the dangers of developing such a weapon and technology, but did they stop? They all assumed that it was safer to develop such a WMD so that they could control it. It's exactly the same with the AI race. It can also be weaponized. AI developers in the US know that someone else will develop it if they don't. For example, if OpenAI gets shut down, DeepSeek in China will dominate. If all the countries miraculously come together and decide not to develop super intelligent AI because of its dangers, someone in their basement will illegally develop it eventually. So we are worrying about an eventuality that we do not have control over. What is unknown, however, is what super intelligent AI will actually do. Sure, it might wipe us out, but it also might not.
Just like the faulty assumption that human beings are in control or always have to be in control, the interview assumes that super intelligent AI will think like us. In other words, super intelligent AI might not want to control or wipe us out, for whatever a super intelligent being's reasons might be. We actually will not know until it happens. There's no point in worrying about something that we do not have control over or something that we cannot predict.
youtube
AI Governance
2025-09-21T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzpwfVEWoYvEQ9HgxV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmamMRxftHJ0lHjpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugww5GJu_h33Nd5fPfV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9YSOKsmvRyvaWGdF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwSiKa7wdgPbWuPxpx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyr-eyPsngx_X1TRhZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw6Iw5dP3CHFJ9M0Lp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxV1sff2Vr2cpIcxRF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzbBfodqbsyEYnu_pp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy1SXFknvVKwtj9ELV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
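The "Look up by comment ID" view presumably resolves a comment ID to its coded dimensions from a batch response like the one above. A minimal sketch of how such a lookup could be built, assuming the response is a JSON array of objects keyed by `id` as shown (the function name and the two-entry sample are illustrative, not the app's actual code):

```python
import json

# Truncated sample of a raw batch response; the real response above
# contains ten entries with the same five fields per entry.
raw_response = '''
[
  {"id": "ytc_UgzpwfVEWoYvEQ9HgxV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy1SXFknvVKwtj9ELV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding row by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_Ugy1SXFknvVKwtj9ELV4AaABAg"]["policy"])  # → regulate
```

Indexing by `id` makes the per-comment lookup O(1), which matters when a page of random samples triggers many lookups against the same batch.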