Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Graphic designers were first in getting exited about feeding, training the AI an…" (`ytc_UgxRjuyhN…`)
- "This AI policy debates started in the right place maximize upside, limit harm bu…" (`ytc_Ugw2rx5Br…`)
- "Love it. At this point if AI tech bros wanna be taken seriously as artists I say…" (`ytc_UgzucwUu0…`)
- "AI can be really usefull for finding specific coding solutions for simple tasks.…" (`ytc_Ugw4DfnvD…`)
- "I think it's clear and obvious that the people who run the AI service in their p…" (`rdc_n9i3pux`)
- "The probability of predicting the future bothers me. Soon, AI will be able to sh…" (`ytc_UgzN6mJIM…`)
- "Same. It was fun a couple of years ago because AI generated art was so wonky tha…" (`ytr_Ugy3XYLgp…`)
- "I stay convinced that with good architecture, isolating features/components to s…" (`ytc_UgyM4SQcb…`)
Comment (youtube · AI Governance · 2024-02-03T01:3… · ♥ 1)

> It's not an _if_ AI will destroy humanity. It's a _when_. We've already seen that during the initial concerning signs that AI will be detrimental to humanity, those that hold the key chose profit margins over humanity's best interest. This will not change, and the human ego will be the enslavement or death of us all.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzYjkkBi3PS11lZR8p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyaieCUYyjkb4vkxyh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxG-JkhkgNx9Cv--Z94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy5JaFKiKKYlXuSa6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVPt5pKeIo0EHbBo54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyuM1vmiMg8cswzMFN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzlIWhhgko9MJPA1uB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzDo2WXaU-X4n3BxUR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwDiO3k-GDoAKKRWmZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyITuGNGz4Qu9csHcN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
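The coding result shown above is recovered from this batched response by matching on the comment ID. A minimal sketch of that lookup (a hypothetical helper, not the app's own code, using an excerpt of the response above):

```python
import json

# Excerpt of a raw batch response: one JSON array per batch, where each
# element carries the comment ID plus one value per coding dimension.
raw_response = """[
  {"id": "ytc_UgzYjkkBi3PS11lZR8p4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyITuGNGz4Qu9csHcN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def lookup_codes(raw, comment_id):
    """Parse a raw batch response and return the codes for one comment."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Strip the ID so only the coded dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None  # comment ID not present in this batch

print(lookup_codes(raw_response, "ytc_UgyITuGNGz4Qu9csHcN4AaABAg"))
# {'responsibility': 'company', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

Any ID absent from the batch (e.g. a comment the model skipped) yields `None`, which is a useful signal for re-queuing uncoded comments.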