Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI has a subconscious mind. The existence of the Anunnaki is recorded on Sumeria…" — `ytc_UgyFl74l1…`
- "I really wonder if a robot would suicide if we did give it the ability to feel a…" — `ytc_UghLYbFb4…`
- "on the nuke missile i would say that 2 keys are needed not only at the control p…" — `ytc_UgxzxULiN…`
- "Elon Musk is trying to save humans from AI extinction or climate catastrophe by …" — `ytc_Ugz7XWzxi…`
- "Stop giving example of calculator and travtor like navies.. if a developer can p…" — `ytc_Ugy6NBvxj…`
- "Thank you for your work on this and everything you provide. My question, though …" — `ytc_UgzaP3jy-…`
- "CEO says they are doing extreme stress test cases AI. This sounds exactly like g…" — `ytc_Ugw05laSt…`
- "AI taking over the world, global warming and nuclear war looming over us. We're …" — `ytc_UgwVicA4r…`
Comment
> I kinda experience this often. As content creator I use AI images and all the time I must be rephrasing the description of the images I need as, according to Ai, it either depicts "violence" or it's in some way, "inappropiate"....I don't believe it's "self aware" or any of those idiotic things they say but rather programming flaws, so the issue always comes down to human error, not that the machines are "waking up" and "rebelling" against humanity 🙄
youtube · AI Moral Status · 2025-07-02T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz4QyVLk6xKZhUbTuR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDpxPSFiojONg1typ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyYLmrPRmrMS5IOAlh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5v7SMQMyvKKs5w_h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyCwDnM-jCGa6OywKt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwDzN73JaBEh0CvJ-B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy-b7MYywucV19vPcV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw8k-TYR08aQlSsesd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwzgj023b8fs0jNfCp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugwec1tPjRa-PFyPabt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
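The raw response is a JSON array of per-comment coding records, one object per comment ID. A minimal Python sketch of the lookup-by-ID step, assuming only the field names visible in the response above (the parsing code itself is an illustration, not the viewer's actual implementation):

```python
import json

# Two records excerpted from the raw LLM response shown above.
raw_response = '''
[
  {"id": "ytc_Ugz4QyVLk6xKZhUbTuR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyDpxPSFiojONg1typ4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
'''

# Index the batch by comment ID so any single comment's coding
# can be retrieved in O(1).
codings = {record["id"]: record for record in json.loads(raw_response)}

coding = codings["ytc_Ugz4QyVLk6xKZhUbTuR4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # indifference
```

Keying the batch by `id` also makes it easy to spot malformed batches: a length mismatch between the array and the dict indicates the model emitted a duplicate or missing comment ID.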