Raw LLM Responses
Inspect the exact model output for any coded comment: look up a response by its comment ID, or pick one of the random samples below. A minimal sketch of how the ID lookup might work follows the sample table.

Random samples (comment text and IDs are truncated by the viewer):
| Comment | ID |
|---|---|
| Ai 1: "How many humans does it take to build a superintelligence that will kill … | ytc_UgwE9BYwN… |
| The AI optimists completely miss the part where someone has to pay for all this;… | ytc_UgxAlyZDD… |
| Thank you, Charlie,🙂👍 for the information in a very user-friendly way. I clicke… | ytc_UgwuygEGV… |
| Okay.. if 99% people loose their jobs, who will buy the Products that these bill… | ytc_UgwP1Tj9j… |
| so how exactly will life be better if robots replace human labor? What will the … | ytc_UgwUHqOOk… |
| When he pulled her face off, it gave me flashbacks to that movie with Rowdy Rodd… | ytc_UgziSTNTd… |
| People need to own the robots and their own AI. Eliminate the universal income a… | ytc_UgyhGcaKT… |
| First of all who in their right mind who punch a metal robot let alone fight one… | ytc_Ugw-bOU5w… |
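As a minimal sketch of the ID lookup, assuming the coded records are stored one JSON object per line in a JSONL file (the path `coded_comments.jsonl` is hypothetical), a prefix match is enough to resolve the truncated IDs shown in the table:

```python
import json

def lookup_by_comment_id(id_prefix: str, path: str = "coded_comments.jsonl") -> list[dict]:
    """Return every coded record whose comment ID starts with id_prefix.

    Prefix matching is what makes the truncated IDs in the sample
    table (e.g. "ytc_UgwE9BYwN") usable as lookup keys.
    """
    matches = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["id"].startswith(id_prefix):
                matches.append(record)
    return matches

# Example: look up the first random sample by its truncated ID.
print(lookup_by_comment_id("ytc_UgwE9BYwN"))
```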
Comment
1:37:57 That's what all leading AI companies are explicitly trying to build. No assumptions needed. The available evidence indicates that they are poised to succeed at creating exactly the thing that the available evidence indicates would be extremely dangerous. Why would we give them a free pass to do this just because of some skepticism that they will succeed at their horrible plans? Clearly they should not be allowed to do what they themselves claim is one-to-one with "the bad thing".
It has not always been like this, and this is in fact extremely different. You will not find the majority of any scientific field anytime in history saying that the technology they are building could result in human extinction. You are rewriting the statements of experts in your head before even processing them, interpreting every cautious understatement as brazen overstatement.
Source: youtube · Posted: 2025-11-20T23:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
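The four coding dimensions can be captured in a small schema for validation. The label sets below are only those observed on this page (in the table and the raw response beneath it); the coder's full vocabulary may be larger, so treat them as assumptions:

```python
from dataclasses import dataclass

# Label sets observed on this page; the coder's full vocabulary
# may be larger (assumption).
RESPONSIBILITY = {"none", "company", "developer", "government", "user"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate", "ban", "liability"}
EMOTION = {"indifference", "fear", "outrage", "approval", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension carries an unknown label."""
        for name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, name)
            if value not in allowed:
                raise ValueError(f"{name}={value!r} is not a known label")
```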
Raw LLM Response
[
{"id":"ytc_Ugxhev4NGxygLF8oZMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDuXnyJgV4hXBoO4B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJ_Utk815mESSL_xd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxf2ysfrcjwOnYW4F54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmcFJw3kKLiERMxy54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxL0H9m8rS1m5QivgV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyFFyGyzTCs1cqrp2N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwMyuR03RQrhVBnhxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcR4GjDOFwp7z_kMd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzG-M5F4kw2zM21MZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
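A raw response is a JSON array of per-comment codes; the second object above is the code displayed in the Coding Result table. A minimal parsing sketch, reusing the hypothetical `CodingResult` from the schema sketch and two entries copied from the response:

```python
import json

# Two entries copied from the raw response above.
RAW = '''[
{"id":"ytc_UgxDuXnyJgV4hXBoO4B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxhev4NGxygLF8oZMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array) into CodingResults keyed by ID."""
    results = {}
    for item in json.loads(raw):
        result = CodingResult(**item)  # CodingResult from the schema sketch above
        result.validate()              # reject labels outside the known sets
        results[result.id] = result
    return results

coded = parse_raw_response(RAW)
print(coded["ytc_UgxDuXnyJgV4hXBoO4B4AaABAg"].emotion)  # -> fear
```

Keying the results by comment ID makes the per-comment view on this page a single dictionary lookup once the response is parsed.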