Raw LLM Responses
Inspect the exact model output for any coded comment: look it up directly by its comment ID, or inspect one of the random samples listed below. The same lookup can also be scripted, as in the sketch that follows.
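A minimal lookup sketch, assuming the coded records have been exported as JSON lines with one record per comment; the file name `coded_comments.jsonl` and the export format are assumptions, not part of the tool:

```python
import json

def load_coded(path):
    """Index coded-comment records by comment ID (assumed JSON-lines export)."""
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

# Hypothetical export file; the comment ID below is the last entry
# of the raw batch response shown further down this page.
index = load_coded("coded_comments.jsonl")
print(index.get("ytc_UgwiPOOL7RWci31byqF4AaABAg"))
```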
Random samples
- "True Artificial Intelligence rewrites and updates its own code as it adjusts to …" (ytc_UgyvtmlQ-…)
- "A legal scholar should have found the part in the documentation that explains th…" (ytc_UgzwhzYUp…)
- "I think he knows it will go badly for mankind..but like always we dont think abo…" (ytc_UgzAiyF-6…)
- "They did a fully researched study, and it was slated for 3 years from now, alon…" (ytc_Ugwy97P98…)
- "like how digital musicians will never be real musicians since they are pushing b…" (ytr_UgwGbigDK…)
- "Generative AI really feels like the media equivalent of the "Gray Goo" doomsday …" (ytc_Ugz6IPzu8…)
- "You make a great point! Wisdom often comes from the application of knowledge and…" (ytr_UgxNGQ1w_…)
- "It baffles me to see such, otherwise, brilliant ppl debating this subject on the…" (ytc_UgwhJKock…)
Comment
> A lot of the discussion is about short term risks: bias, harmful content, misleading information. We're missing the most important conversation that we should be having: existential risk -- what happens if Artificial General Intelligence is created, and undergoes improvement to become smarter than humans? Humans are the top species on earth because we can think and plan for the future, invent technology, etc. Tigers have sharper claws, but human expansion has made them almost go extinct. When AGI becomes smarter than humans, how to we ensure that it acts in our interests instead of perusing some goal to the limit, like turning every atom on the universe into computer substrate? Keep in mind, you are made out of atoms. These questions form the field of AI Alignment, and these conversations need to happen more broadly, even in the political sphere.
youtube · AI Governance · 2023-05-16T22:4… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
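When scripting against these records, it can help to check each coding against the label sets seen in this tool's output. A minimal validation sketch; the enumerations below cover only the values visible on this page and in the raw response that follows, so they may be incomplete:

```python
# Allowed labels per dimension, as observed in this tool's output.
# Assumption: the real label sets may contain values not seen here.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "industry_self"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_coding(record):
    """Return (dimension, value) pairs whose value falls outside the known set."""
    return [
        (dim, record.get(dim))
        for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]

# The coding shown in the table above passes cleanly.
print(validate_coding({
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "fear",
}))  # -> []
```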
Raw LLM Response
[
{"id":"ytc_Ugz-joppXaxvrKN_qaR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzbWdh0GPN6t9DLPUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugze3YtvOPJIRlrmqQd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugzueyln2mYHF4vgZUZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxpYIclchsBHo1Yx6V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwveCX6rpZeuV8PmNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQmeN99YEvjljCTjt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEbeFcN4jvJBB5xdh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYWVcPamogzkuLI2N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwiPOOL7RWci31byqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
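To pull a single comment's codes out of a raw batch response like the one above, a short parsing sketch; the function name is illustrative, and the inlined one-entry batch is copied from the response above so the example runs on its own:

```python
import json

def codes_for(raw_response, comment_id):
    """Return the coding entry for one comment ID from a raw batch response."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

# One entry copied from the batch above, inlined for a self-contained run.
raw = '''[{"id": "ytc_UgwiPOOL7RWci31byqF4AaABAg",
           "responsibility": "none", "reasoning": "consequentialist",
           "policy": "none", "emotion": "fear"}]'''

print(codes_for(raw, "ytc_UgwiPOOL7RWci31byqF4AaABAg"))
```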