Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What is more frightening... the government agencies and corporations behind the …" (ytc_UgwsZSC9e…)
- "I love Shad for his historical knowledge and technical skill in HEMA, but I've b…" (ytc_UgyE8LoBG…)
- "Ai is super dangerous continues to build a huge ai supercomputer and chip people…" (ytc_Ugz6RMhvc…)
- "as someone who is a bit of an artist myself, i am also a cynical realist and thi…" (ytr_UgyeVRCBc…)
- "Thank you for having Roman explain the risks of AI in such a calm and good-faith…" (ytc_UgyzDNSJ6…)
- "Watermelon, fried chicken, and monkeys? AI can’t even come up with new racist tr…" (ytc_UgxfCS5Xj…)
- "I think the comparison to early personal computers makes sense on the surface. …" (rdc_oi30ti6)
- "i learnt me this here lesson when i was younger than a waymo is tall... try me.…" (ytc_UgwPjgWOe…)
Comment
Assuming progress continues, AI will become much more capable than humans in an increasing number of domains. To make use of this potential, we will need to give these systems resources.
>There are lots of geniuses in the world buddy. Being smart doesn't make you more capable of taking over the world.
Intelligence in this context means capability. Something more capable than a human in every domain would obviously be more capable of taking over the world.
>There's no way to know that the president of the United States isn't a crazy person who will launch the nukes because he's angry someone called him an orange blob either. Which is why we have safeguards against that.
We don't have many safeguards around AI, and there's clearly a financial incentive to ignore safety in order to be the first to capitalise on the potential AI offers.
reddit
AI Governance
1708176946 (2024-02-17 UTC)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_kqt5ru8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kqu2y8v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtb3wm","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kqt78dn","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtbky6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
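The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a payload might be parsed into a lookup keyed by comment ID (the variable names here are illustrative, not part of the original pipeline; note the array must be terminated with `]`, or `json.loads` will raise a `JSONDecodeError`):

```python
import json

# Raw LLM response, verbatim from the log above (with a valid closing bracket).
raw = """[
 {"id":"rdc_kqt5ru8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_kqu2y8v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_kqtb3wm","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_kqt78dn","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_kqtbky6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# Index the codes by comment ID so each dimension can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["rdc_kqt5ru8"]["emotion"])    # fear
print(codes["rdc_kqtbky6"]["reasoning"])  # mixed
```

A comment whose ID is missing from `codes` (or whose dimensions all come back "unclear", as in the Coding Result table above) would then be flagged for manual review.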