Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "America will be the only country that crumbles, they will just seek of who they'…" (ytc_UgxWyV28H…)
- "I work in AI dev. If you say anything a reputable assistant has been trained to…" (ytr_Ugw1_RjJk…)
- "Let me play the tiniest violin for him. He used stoled art from AI datasets to…" (ytc_UgyGdQuM5…)
- "@DonizeteSA-x8f in which case it feels better than treating my heart like rubber…" (ytr_Ugyh8aoli…)
- "How many emotional development challenges have writers failed at? Did they neve…" (ytc_UgysEohUO…)
- "Nah mine are chotic as hell, BRO I NEVER WANNA TALK TO STRIKERS AI EVER AGIN…" (ytc_UgwvKbnG5…)
- "I've worked on AIs before. It's quite common for AIs to praise Hitler if you don…" (ytc_Ugw6ardXy…)
- "Imagine A.I takes over and we’re judged by everything it knows from our personal…" (ytc_UgyBiGABA…)
Comment
We need a disaster caused by AI. And soon. It needs to be significant enough, where enough of us die or our financial system is crashed, so that we as a collective have no choice but to seriously accept the danger BEFORE AGI is reached. I know... Not a popular contention. But this is the only chance we have of engineering safety and control mechanisms that are at least on par with capability. Because if this doesn't happen, AGI will emerge at some point in the next generation, and then we'll just have to take our chances sharing this world (and the universe) with something vastly more intelligent than us. But this is just a nice way of saying we'd be doomed.
Source: youtube, posted 2025-01-08T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz--w5v9NLFuI0HLNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyK9PqiG93vC5Z5qgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz5PNBsAB935H-5Fh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHfOPWI1WN22d0AvR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyxhagj0nv-U8MyCTt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzFrqnZ7sdgoG3bF4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwqblHF83JZ5MCWvR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxIiyW4_QdAqW7rdNV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdEojchf_Bj2bqH9V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzawtYWlWT1Z9p0sYV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
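The raw response above is a JSON array in which each object carries a comment `id` plus one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed into a per-comment lookup is below; the `parse_coded_batch` helper and the required-key check are illustrative assumptions, not part of the original pipeline, and the full codebooks may allow more values than appear in this single batch.

```python
import json

# Two rows copied from the raw batched response shown above.
raw = '''[
{"id":"ytc_Ugz--w5v9NLFuI0HLNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyK9PqiG93vC5Z5qgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

# The four coding dimensions plus the comment ID; every row must have all five.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coded_batch(raw_json: str) -> dict:
    """Turn a raw LLM batch response into a {comment_id: codes} lookup,
    rejecting any row that is missing a required dimension."""
    rows = json.loads(raw_json)
    coded = {}
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id', '?')} missing keys: {missing}")
        coded[row["id"]] = {k: row[k] for k in REQUIRED_KEYS - {"id"}}
    return coded

coded = parse_coded_batch(raw)
print(coded["ytc_Ugz--w5v9NLFuI0HLNR4AaABAg"]["emotion"])  # fear
```

A check like this guards against the common failure mode of batched LLM coding, where the model silently drops a field or a row; a malformed row raises instead of being merged into the results.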