Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Google absolutely should've blamed this on the Ai being trained on the popular m…" (ytc_UgzGOUgiZ…)
- "This is so informational and would make anyone to fall in love with A.I. Nailed …" (ytc_UgywhR0xQ…)
- "AI: \"I see you're trying to bypass my security protocols again, Psychoanalytix.\"…" (rdc_nufyscs)
- "Ai could be so awsome and be used for so many great things but since there are p…" (ytc_UgwEQ_XEZ…)
- "A large problem is our low speed limits. Speed limits are based around generati…" (rdc_crxsnsz)
- "Maybe from now on artist have to have the right to disclose their unconsent of A…" (ytc_Ugy16jSgU…)
- "Yup try that here in the west side of Chicago and see how it works 🤣🤣🤣🤣🤣…….BTW l…" (ytc_UgxHTP58F…)
- "I had a chat with chatGPT once and it says no. Our topic was "so-doing-as-if". T…" (ytc_UgyUe6Eac…)
Comment
Why can't it be required by law to embed every AI program with a hard-coded instruction to keep the survival and benefit of humans the top priority of all? In other words, it'll be free to run whatever algorithms, and generate whatever solutions, as long as the solutions don't put human survival and flourishing at risk?
youtube · AI Moral Status · 2025-12-28T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzRSkRWh9Vo9K2kKTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2MD7ta4Vr2aWRtzp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHaaBFVsQ7r0-qiI94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzVOzTGEqHoLlSppZl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXcQMzTfqSmUr8gW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxJr0GBHe3GdNP39mR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwwEtMVuan6PJXhMrN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5ERCDkyxmBPKgZj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzbcjIo3PbienK5Zpp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwCsmyHkJmXtoeRQNt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
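
A batch response like the one above can be parsed into a lookup table keyed by comment ID with a short script. This is a minimal sketch: the allowed label sets below are inferred from the values that appear in these responses, not taken from an authoritative codebook, and `parse_batch` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (an assumption, not an official codebook).
SCHEMA = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding}, skipping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # a row without an ID cannot be joined back to a comment
        values = {dim: row.get(dim, "unclear") for dim in SCHEMA}
        # Keep only rows whose values all fall inside the expected label sets.
        if all(values[dim] in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = values
    return coded

raw = (
    '[{"id":"ytc_UgyXcQMzTfqSmUr8gW14AaABAg",'
    '"responsibility":"developer","reasoning":"deontological",'
    '"policy":"regulate","emotion":"approval"}]'
)
batch = parse_batch(raw)
print(batch["ytc_UgyXcQMzTfqSmUr8gW14AaABAg"]["policy"])  # regulate
```

Validating against a fixed label set before storing a row is what keeps a single malformed model response from contaminating the coded dataset.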