Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Actually every fucking scientist says this. Every earth science book says this. … (rdc_d2ynamv)
- The end of the world is near. We already have robot soldiers and it seems the cr… (ytc_UgyGKc7s3…)
- AI already draws really shitty hands. You don’t need to poison it to do that. … (ytr_Ugz0P02wk…)
- I don't get the point.. why would you put AI generated articles on a webpage, wh… (ytc_UgygUb83U…)
- Ai can’t become conscious in a human sense EVER, it can however form bias from b… (ytc_Ugw02ETPf…)
- None of this matters. Human developed Ai has been far far exceeded by Alien Ai, … (ytc_UgzBIW-Mi…)
- (translated from Russian) If only you could lift the old face off old women the same way and put on a young one… (ytc_UgywNgpbn…)
- "it might be revolutionary" 😭🙏🏻 while showing us the ai slop thats no where half … (ytc_Ugy2BM65i…)
Comment
One more thing. Remember when nuclear technology was going to be the answer to humanities problems? I do I grew up in that era and have seen the outcomes. large areas of land made uninhabitable, huge quantities of polluting materials that cannot be safely cloistered for the thousands of years they will be dangerous, the failure of nuclear explosives, but the ongoing threat of nuclear weapons, the replacement of nuclear radiotherapy based on isotopes. Any new tech has to be seen as both a gift and a potential curse. In our current hysteria to build machines we do not understand fully we may not destroy our selves but we also might. So far the pollution of AI is more likely to destroy our civilization. We do not know the risks.
youtube · Cross-Cultural · 2026-01-01T21:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy2excLhBQtKFSIwSl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxxlVxuRpdQKDVP2RB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnvWf7qNfA7yM3ZQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwh8bQJ9gt_L-S72JN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwPDTsouirRPEcINU94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBlbvDKtZ61znz9WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxi9xdQl7-OOkoexup4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwqr2LMcAux374rmzh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz5CohfKzud9KEl69V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy4dPz8xGcVoOcKEGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
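The "Look up by comment ID" step above can be sketched in a few lines: the raw LLM response is a JSON array of coding objects, one per comment, each carrying the four dimensions shown in the coding table (responsibility, reasoning, policy, emotion). A minimal sketch, assuming exactly that structure; the allowed-label set below is an assumption inferred from the values in this sample, not a confirmed schema:

```python
import json

# One entry copied from the raw response above; in practice this string
# would be the full model output.
raw_response = """
[
  {"id": "ytc_UgwBlbvDKtZ61znz9WV4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]
"""

# Assumed label set for light validation -- inferred from the sample,
# not a documented schema.
ALLOWED_EMOTIONS = {"fear", "outrage", "approval", "indifference", "unclear"}

# Index the coding rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

row = codes["ytc_UgwBlbvDKtZ61znz9WV4AaABAg"]
assert row["emotion"] in ALLOWED_EMOTIONS
print(row["reasoning"])  # → consequentialist
```

Indexing by ID also makes it easy to detect comments the model silently dropped: any ID sent in the prompt batch but missing from `codes` went uncoded.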