Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below:
- `ytc_Ugzanz6SU…`: "its obvious now that something as powerful as AI is slowly being taken away from…"
- `ytc_UgzFnlBur…`: "Answering an AI Thief with a tutorial on how to draw in your art style is absolu…"
- `ytc_UgxjMgtA2…`: "In my company they're experimenting with some teams what is gonna happen when th…"
- `ytc_Ugy841Yzn…`: "Not surprising that the ex-Google CEO is gaslighting and out of touch. Saying th…"
- `ytc_UgysWgZyq…`: ""a direct match" with absolutely NO SOUL. Like, damn, that AI generated one real…"
- `ytc_UgwhJfQ6t…`: "I don't know where to start. You really don't understand where the real solution…"
- `ytc_UgwpazacH…`: "as an artist my only problem with it is, people r gonna be using this and callin…"
- `ytr_UgxNcXRb7…`: "@whynotcode I've been coding since I was 10 in 1982. I use the latest AI models …"
Comment

> I appreciate the counterpoint at the end. I think people need to be doing more to stop the harms LLMs are causing right now. People can either use fear of superintelligent AI as an argument to regulate it or as a way to shift the focus away from current problems. And my worry is more people are doing the latter than the former. I think conversations like this are important, but we also need to talk about Meta's regulations for its LLMs that explicitly allowed medical misinformation, images of elder abuse, and inappropriate content depicting children. It's chilling, and more people need to know about that too

Source: youtube · Video: "AI Moral Status" · Posted: 2025-10-31T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwrLmycNPBt6-Jy9_t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy_B8qiu32XqdMW0bd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2qgHGv3OEnQWYM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxJq66Dr8wK6u1VAyt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzsm1c33eOVut0mgC14AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx5USNcAfrt868_Awx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcMawkbb6I8kgrcZN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzeO5pukZ_tjKUncdJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_AZ-7DzqLMgm6wWB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwLlGY6aAQe6o9LOSl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
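A raw response like the one above can be turned into the per-comment lookup the page offers. The following is a minimal sketch, assuming the model returns a JSON array of objects with exactly the five fields shown; the allowed value vocabularies are inferred from this one sample (they may be incomplete), and `parse_codings` is a hypothetical helper, not part of any real pipeline.

```python
import json

# One entry copied from the raw response above, standing in for the full array.
raw_response = """
[
  {"id": "ytc_Ugw_AZ-7DzqLMgm6wWB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

# Value vocabularies inferred from the sample response; assumed, not exhaustive.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_codings(text):
    """Parse a raw LLM response and index the codings by comment ID,
    skipping any record with a missing or out-of-vocabulary value."""
    by_id = {}
    for rec in json.loads(text):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw_response)
print(codings["ytc_Ugw_AZ-7DzqLMgm6wWB4AaABAg"]["emotion"])  # approval
```

Indexing by ID this way is what makes the "look up by comment ID" view cheap: each dashboard card is a single dictionary lookup rather than a scan over every raw response.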