Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "For those who are interested, this is caused by the system prompt containing som…" (ytc_Ugxe5xlyM…)
- "Hannah, you've been so influential to my younger education... But please do not …" (ytc_Ugy3OvGvP…)
- "I've always said that the industry that stands the most to gain by driverless te…" (ytc_UgyCNzM2d…)
- "They taught us that AI is a robot yes here on ground but i believe in space ther…" (ytc_UgyUSHjaP…)
- "The discussion of anthropomorphic raging triggered a flashback to 1968: I was he…" (ytc_Ugyl4G-jG…)
- "I loved how you summarized my whole point that yeah, AI images can be pretty and…" (ytc_UgyUEUsjH…)
- "In China, they have been doing this for years, but in a hybrid, where there is s…" (ytc_UgylexOU-…)
- "As a digital artist, I don’t have a problem with generative AI in principle—my i…" (ytc_Ugylbnk7F…)
Comment
A lot of people don't believe it's possible, that AI will always be dumb or limited in some ways, or at least, not much smarter than a human.
Some people don't think that there will be any goal seeking behaviour and therefore nothing to worry about (ie they think the instrumental convergence thesis is wrong).
Some people say that if they just live in computers we can turn off the power grid and Internet and we'll be fine. Sounds fun let's do it.
Some people say that the resources of Earth are too limited and nobody will be able to train a superintelligence (ie no algorithmic or architectural advances are possible).
Some people say that it's going to take a very long time so it isn't worth worrying about at this point.
Some people say that there will be lots of superintelligences and they'll have to work together, so naturally we'll be fine. Or that because they're trained by us, they will be guaranteed to love us, even if a bad actor undoes the safety training.
None of them hold up to much scrutiny imo.
youtube · AI Responsibility · 2026-04-21T22:4… · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwa0_-Wu5d_T7l6jNp4AaABAg.AVsINn_lJbPAVsoL7IOJTa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwa0_-Wu5d_T7l6jNp4AaABAg.AVsINn_lJbPAVteqqqm1Qe","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwKEzwBsfbejQpOcIR4AaABAg.AVsFsSLFb2RAVsHzRKmQE9","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugzh96MmAg0_D4CLu9R4AaABAg.AVsFDW9Mwn1AVurQACr4_W","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxLqe5VQFpN5Jf_nC94AaABAg.AVs6TZbRBd9AVs75JMgMw-","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgxLqe5VQFpN5Jf_nC94AaABAg.AVs6TZbRBd9AVsAsjghPz8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxZoEmu2VIdwc5hy154AaABAg.AVs2IY67CkrAVs4yLe6Si1","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzNCpnbSXT-WjY_iwZ4AaABAg.AVs08yH6QJmAVs5HyA5RGV","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgzvEQxocgEgb-5XcmB4AaABAg.AVs-y98AtlZAVs2W3bio4w","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzEh0FMTWyZPE7edvN4AaABAg.9al0vaPC7l39jyS80ViJv_","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
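The raw response above is a JSON array of per-comment records, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response might be validated before ingestion follows; the allowed value sets here are inferred only from the values visible in this sample, not from the project's actual codebook, and the function name is illustrative.

```python
import json

# Code values observed in the sample response shown above.
# The real codebook (not reproduced here) may permit more values.
ALLOWED = {
    "responsibility": {"none", "government", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate_coding(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must be an object with an id and a recognized
        # value for every coded dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record response: the second has an unknown
# "responsibility" value and is dropped.
sample = '''[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "alien", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

print(len(validate_coding(sample)))  # 1
```

Filtering rather than raising keeps a batch usable when the model occasionally emits an off-codebook value; the dropped records can be re-queued for re-coding.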