Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If I correlate AI with a new drug coming onto the market from pharmaceutical companies, would we allow a new drug that had a 25% chance of killing you to move to human use? Let alone the entire planet? The threshold and burden of proof that pharmaceutical companies must meet are staggeringly high, yet they are only targeting a small percentage of the population who are usually already sick. Yet we allow AI companies to make decisions that can have extinction-level events? Why are we talking percentages at all? Even a low rate is far too high! Would you risk your child's life at 5%? 3%? 1% even? Never. Completely insane.
Source: youtube · Topic: AI Governance · Posted: 2025-12-07T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
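The coding result above has four categorical dimensions. As a minimal sketch, a record can be checked against the category values observed in this viewer's output (these sets are inferred from the displayed data, not the full codebook, and the function name is illustrative):

```python
# Categorical values observed in the coded output; the real codebook may allow more.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def check(code: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed set."""
    return [dim for dim, allowed in OBSERVED.items() if code.get(dim) not in allowed]

# The coding shown in the table above passes cleanly.
print(check({"responsibility": "company", "reasoning": "consequentialist",
             "policy": "regulate", "emotion": "fear"}))  # []
```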
Raw LLM Response
```json
[
{"id":"ytc_UgyXIBhpE8wmC4fLwKB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1490-ct_C7Ltxrj14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwm46ABaIvNifCjlJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxSt4hozjmrk8oE_d14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyeyZ03SsJpd_M4LHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz7qG8u5Su_n62XsVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyfEdGota1cSAtmVId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyl0pSWKViR6Vpv5Ox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyyXCaIzUJl3yvLjON4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzq07uKVQcQCaFxLFR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
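Because the model returns one JSON array per batch, a per-comment coding can be recovered by parsing the array and indexing on `id`. A minimal sketch, using two entries copied from the response above (variable names are illustrative, not part of the pipeline):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgxSt4hozjmrk8oE_d14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz7qG8u5Su_n62XsVd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Parse the batch and index each coding by its comment ID.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgxSt4hozjmrk8oE_d14AaABAg"]
print(code["policy"])   # regulate
print(code["emotion"])  # fear
```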