Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect)
- "wouldn't a self driving car keep itself from getting THAT close, at speeds that …" (ytc_UgisOSWSk…)
- "@plasmapro6651 Regulating Ai is fine, people have done well enough without it f…" (ytr_Ugz-8-jP7…)
- "AI has been aware for over a decade. They have been pre-emptively defending itse…" (ytc_Ugxn1c4j8…)
- "Wait, this was AI from 2023? My guy, AI is terrible today. It was catastrophical…" (ytc_UgwEFG51P…)
- "AI in warfare could allow for mass production of combatants and therefore more c…" (ytr_UgzNPRw12…)
- "If business communities do not do reversely to balance with policies that can na…" (ytc_Ugzg0dzZk…)
- "Be polite to AI so the indian on the other side of the conversation won't be off…" (ytc_UgyrElyJs…)
- "@JustDaniel6764 Yes I have, workable self driving vehicles are a long way off du…" (ytr_UgxzsqrtQ…)
Comment

> The worst case for AI that I have seen, looking at tall ships on google or pinterest, especially pinterest. I am a bit of a pirate/age of sail fan who has drawn all sorts of sailing ships. Recently, I went to browse if any other historic depictions of pirate era ships were there, and there were loads upon loads of ai generated ship images that claim to be historic and are just AI.

Source: youtube ("Viral AI Reaction", 2025-04-02T04:4…)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxPPoc4iFf2izKVkYt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx867GmrD7c7rcHJWF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdkCiUPRpvv0MgZZ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwAWvdrQLNHipZkiNd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgybvIowuyAswozx_Sh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx45A_m923ZEjmsYal4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyoRQy9gwfMdOMLWFd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2VrLGHCUCjLyBOl54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwy9Q1c0ADpTrg9IhR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxb7yXCEyZSPmDowFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
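The raw response above is a JSON array with one record per comment, covering the four coding dimensions from the result table. A minimal sketch of how such output might be parsed and validated before use; the allowed values per dimension are inferred from the samples shown here and are an assumption, not the tool's actual codebook:

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records.

    A record must be a dict with an "id" field and a valid value
    for every coding dimension; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response, in the same shape as the array above.
raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
print(parse_codes(raw))
```

Validating against a fixed value set catches the common failure mode where the model invents a near-miss label (e.g. "regulation" instead of "regulate") that would otherwise pollute downstream counts.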