Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I like the concept of charicter ai, i mean, getting to import yourself in your f… (ytc_UgzC5yt_L…)
- This is so stupid you can't automate plumbing, it's absolutely not possible now … (ytc_UgyQaoCEt…)
- These claims are way overblown. A ceiling has already been reached with generati… (ytc_Ugxy2y3DO…)
- See how clever they are? "Hey it could save YOUR kid from suicide, why WOULDN'T… (ytc_UgzTFjdpH…)
- I completed a research paper that I was genuinely excited to get feedback on as … (ytc_UgzkOHHO5…)
- Mm-hmmm… You do know you can write instructions on what kind of personality Chat… (ytc_UgzxN83th…)
- Every tech can be used in a good and bad way. I am pro AI and advancement all th… (ytc_UgwJP_mXW…)
- Pretty sure there's evidence that those who spend significant time with LLMs, ha… (ytr_Ugzf1xhyY…)
Comment
We have to have a global summit to decide as a species how to proceed with technologies like AI and CRISPR. We have to create a Future Risk Commission. One whose mandate is identifying potential extinction-level issues before they arise, bring them to the world's attention, and then we all decide how to proceed. It's also extremely obvious at this point that with 99% unemployment, we're going to need a Universal Basic Income.
youtube · AI Governance · 2026-02-11T00:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxLkX4bcaVgXN_LzXF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPROgPmUZKoHSvng54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyBbQm7ismj-SMpieR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJjxpAcLplq60E_4l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyChuK8VP7pilKrQIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgypyTtPsw53JhLHZEF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2BHvQ9yrUcnAE42F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwa2EQg3yLymz30vAt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-nycir-KNfgJNMPl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyeYfKZQPnJLQTajx14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
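The raw response above is a JSON array in which each element carries a comment `id` plus the four coded dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view, assuming exactly this shape; `index_codings` is a hypothetical helper, not part of the tool itself:

```python
import json

# Coded dimensions as they appear in the raw responses above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the codings by comment id for O(1) lookup."""
    records = json.loads(raw_response)
    # Fall back to "unclear" if the model omitted a dimension.
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
    }

# Example using one record from the response shown above.
raw = '''[
  {"id": "ytc_UgyPROgPmUZKoHSvng54AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''
codings = index_codings(raw)
print(codings["ytc_UgyPROgPmUZKoHSvng54AaABAg"]["policy"])  # regulate
```

Keying the records by `id` up front is what makes a lookup panel like the one above cheap: each inspected comment resolves to its coding in constant time instead of rescanning the array.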