Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I'm not a good graphic artist, I much prefer to draw with words (in my mother to…" (ytc_UgxeDBUui…)
- "AI is not always correct either. It is a collage button for the lazy and the pa…" (ytc_UgyUVaLCa…)
- "People need to start seriously pushing back on government officials and tech com…" (ytc_UgzV_EF9f…)
- "Conclusion: the godfather of AI is just now realizing that he has fucked all hum…" (ytc_UgwMzBNZJ…)
- "Thanks for challenging the argument I hear all over social media "Only non-artis…" (ytc_UgxekfbgS…)
- "I’m not even an artist. But I love drawing and I’ve spent years learning with ot…" (ytc_Ugw6HGDgs…)
- "I do think it’s good practice, in case someone DOES figure out how to reverse ni…" (ytc_Ugxne96JB…)
- "My shitty company enforces a rule that those who do not use the company’s AI too…" (rdc_lqqzxfp)
Comment
AI (as in LLMs) are dangerous because they can easily produce and promote advanced malware and be used maliciously by companies like MSFT who simply need to ask "So, Copilot, give us a summary of what EK ID number so and so did the past week" by asking their AI to give them a quick succinct summary of what Windows Recall info sent to their servers about you. They make malicious spying that much easier (among other similar problems).
Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxDlAQpJvFbgGM4r4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjUsv4wUBOyvwEwRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwaha5FvqKpPn5hTL14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQ7alvMqtC2j7XWyN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyBNlFBAtH6vnIH_7F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxD7d65FwNleHg6ndh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxdJNz5J6OmXBHtUHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzc0TaJYKf3z5Gpc6F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDSWrHCmEbQb6BGxp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzeml3bGYbm1250Kup4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
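The raw response is a JSON array with one record per comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and sanity-checked before storing the codes — the allowed value sets below are inferred from this single sample, not from the full codebook, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A usable record needs a comment ID and a known value
        # for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(parse_coding_response(raw))
```

Records with unknown category values are dropped rather than corrected, which keeps the stored codes consistent with the codebook; a stricter pipeline might instead log and re-prompt for malformed records.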