Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwKbqjnK…: "we are misunderstanding & cleaning the real question or issue here Why have we …"
- ytc_UgwGqZaP9…: "We should he championing the advent of AI for exactly this reason - that it will…"
- ytc_UgzM_HYZq…: "I would like to make a point that there are no AI art and there are no AI artist…"
- ytc_UgyoDFdYR…: "There's a certain degree of irony here. I'm leading a tabletop game for some fr…"
- ytr_UgyC9wDTS…: "@scsft I use Copilot already in my job, as I said for generating scaffolding a…"
- ytc_Ugy-PWLVQ…: "Only thing ai should use that wont directly benefit us is maybe ai chats and eve…"
- ytc_UgzyWxl7B…: "This is the best take on the dangers of AI. It has absorbed the worst of human t…"
- ytc_Ugw-SUhpa…: "I DONT CARE ABOUT STATISTICS, FROM A HUMAN EMOTIONAL STANDPOINT THIS WONT WORK. …"
Comment
Right now humanity is getting tricked by some kind of monster inside us: like get more money, cure cancer, solve all of your problems that you ever hoped for, for that you just need to create super intelligent AGI, don't think about some narrow tools, think globally - yes, very nice, you already have 1 trillion dollars of investment, right choice, good boy.
And then in let's say 2029, yeah, we are solving more and more problems, your abstract dad's cancer is cured, living in a flat is super cheap, food is also just super cheap and affordable even in Burundi. But then in 2030 we all will fall down simultaneously and just powered off by AI and soon be dead. Humans never understood this concept, and thought about momentum gain not about future insanely huge risks. I think already now in 2026 we may conclude that we are all cooked in 3, 5, 10, 15, 20 years, it is only the matter of time, but the result is determined
Platform: youtube
Posted: 2026-04-20T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwizWR8wv0sepAnD9F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDWz62-jtKvy7RHvN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgztUWEyxtobcbLHdsx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpEO0ZmuEFec9xMLp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzLWh-dF3qDNWE2fUl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzqK8zsu0tzVWt5AmR4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy_3FpDfOJpz1aE__R4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwcHiw_G7pvQI5oKWR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz9eV1ABsuay_EcCEZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyx40gh3-PeYTlSR2V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
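A response like the one above can be checked before it is stored. The sketch below is a minimal, hypothetical validator, assuming the four-dimension schema (`responsibility`, `reasoning`, `policy`, `emotion`) plus an `id` field inferred from the sample payload; the `ytc_`/`ytr_` prefixes are taken from the IDs shown on this page, not from any published spec.

```python
import json

# Assumed schema: every coded record carries exactly these five keys.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Sample record shaped like the raw response above (values from the first entry).
RAW = """[
  {"id": "ytc_UgwizWR8wv0sepAnD9F4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def validate(records):
    """Keep only records whose keys match the expected coding schema
    and whose id uses a known comment/reply prefix."""
    valid = []
    for rec in records:
        if set(rec) == REQUIRED_FIELDS and rec["id"].startswith(("ytc_", "ytr_")):
            valid.append(rec)
    return valid

coded = validate(json.loads(RAW))
print(len(coded))  # → 1
```

Records that fail the check (missing a dimension, or an unrecognized ID prefix) are silently dropped here; a real pipeline would likely log them for re-coding instead.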