Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples:

- "It depends on how it plays out imo, if the people are the ones with control and …" (rdc_j4x5nvo)
- "Istg art is going to be the next nft (as in, because so many people make money s…" (ytc_UgzAWXmLh…)
- "How is having a robot write for you going to make you a better writer?? Disabled…" (ytc_Ugzg7LOt7…)
- "Okay, well wouldn't if we if we were being paid to stay home and do nothing take…" (ytc_Ugz2_CJvD…)
- "The true A.I. Artist is the one who programmed the A.I. to be able to make actua…" (ytc_UgzAG3ARA…)
- "It is still surprising to many, but LLMs are not programmable. After it's train…" (ytr_UgxCKhehT…)
- "When AI collapses let's make sure we remember certain AI company CEOs that were …" (ytc_Ugztb2r4Y…)
- "It sounds like you're curious about the purpose behind this interaction! In the …" (ytr_UgycuX6Pf…)
Comment

MIT Technology Review devoted an article to it: How existential risk became the biggest meme in AI. The gist: tech companies like to move concerns about AI to the distant future, where their current interests are not at stake. “When we talk about the distant future, when we talk about mythological risks, we reframe the problem as one that exists in a fantasy world and the solutions can only exist in that fantasy world”

Source: youtube · Topic: AI Governance · Date: 2023-08-10T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwq9O-sL5rxFU8NDox4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgySnXWsURDSJGlGwhR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyt71PWBUj03pPmcBp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugxfgo9-rNmgt6xvZGF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyw1HFnNzkRuD90YB94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyiEpcZnzt5ncQ41a94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxiJdP1d_IK9nC-UIt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwVIDsTtjdAXZClPql4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyxB6k9jThpX1ujgF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyWYzMXvCI8Dm0mIGV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
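The raw response is a JSON array of per-comment records keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of how such a response could be parsed into a lookup table, assuming the value sets seen in this batch (the full codebook may define more categories, and the `DIMENSIONS` mapping below is an illustration, not the project's actual schema):

```python
import json

# Allowed values per coding dimension, inferred from this batch only
# (assumption: the real codebook may include additional categories).
DIMENSIONS = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating each dimension."""
    table = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in DIMENSIONS.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {item.get(dim)!r} for {dim}")
        table[cid] = {dim: item[dim] for dim in DIMENSIONS}
    return table

raw = ('[{"id":"ytc_Ugwq9O-sL5rxFU8NDox4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
coded = parse_response(raw)
print(coded["ytc_Ugwq9O-sL5rxFU8NDox4AaABAg"]["policy"])  # regulate
```

Validating against an explicit value set at parse time catches the common failure mode where the model emits an off-schema label, rather than letting it silently enter the coded dataset.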