Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgwZqSDS_…`: Rght as he's getting to the point, "Maybe the AI makes its own rules"... What...…
- `ytr_UgwWu_MoP…`: It's fascinating to see the capabilities of AI like Sophia, isn't it? If you're …
- `ytc_UgwUq0y37…`: Whenever I talk to ChatGPT, it asks me how I would like to be remembered.…
- `ytc_UgzR_8rzo…`: To learn AI what are the basics I need to know? I'm a +2 boy of biology stream n…
- `ytc_UgznYRpyu…`: We didn’t need a deepfake to watch Biden shake hands with thin air on April 16, …
- `ytr_UgxO1r6O8…`: @fen3311 No, the issue isn't when it decides. The issue is when what it's been…
- `ytc_UgxfGQ0FY…`: Judging from the comment section, it seems too many random people with no idea o…
- `ytc_UgjWeooRv…`: insurance would probably be cheaper if it was in direct competition with human d…
Comment
Imagine it’s 1940, and we hand an AI every scientific paper humanity has produced up to that moment. Fission had just been discovered, the term was barely a year old, and no one yet had proof a chain reaction could actually work. The crucial measurements, the industrial breakthroughs, and the engineering blueprints that made the Manhattan Project possible wouldn’t come until 1941 through 1945. So, what would that AI give you? At best, a really polished literature review on uranium and neutrons. But could it invent the bomb out of that? No. Because the information to do so simply didn’t exist yet.
And that’s the blow: AI can remix what we already know, but it cannot conjure discoveries that haven’t been made.
youtube · AI Governance · 2025-09-06T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxlmscQ58XZpJ3X3qN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTzZzrlvspzdxncQZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzXQmq86ckpNbExv554AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgycraD6cejkzMLMCqR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5DHngiRCRwRdMPGx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQ82qHLz5HpXlA8nB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnIUvsBHZP39QZhUJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyKmnwumadPd0dehW94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyyGegSnJVB8FtgEYB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxfP5k-S_ebMI9dqbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
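The raw response above is a JSON array of per-comment codes across the four dimensions in the result table. A minimal sketch of how such an output could be parsed and validated before filling that table (the allowed value sets and the `parse_codes` helper are assumptions inferred from the labels visible on this page, not a published codebook):

```python
import json

# Allowed values per dimension; inferred from the codes visible on this page.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "company",
                       "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}.

    Missing dimensions default to "unclear"; rows with values outside
    the allowed sets are dropped rather than stored.
    """
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in values for dim, values in ALLOWED.items()):
            coded[row["id"]] = codes
    return coded

raw = ('[{"id":"ytc_UgxlmscQ58XZpJ3X3qN4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgxlmscQ58XZpJ3X3qN4AaABAg"]["emotion"])  # fear
```

Validating against fixed value sets is one way to catch the usual LLM failure mode here: a syntactically valid row whose label is outside the codebook.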