Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
G
Autonomous robots killed 29 scientists in Japan in 2014 and it's scarey, why do …
ytc_Ugw-sr8aW…
G
this was super interesting... the only thing i don't agree with him is the way h…
ytc_UgwdHCHXE…
G
Isn't "projection" when person a unintentionally accuses person b of doing somet…
rdc_hsnvqse
G
You’ve generated hundreds of AI images??? Are we just gonna skip past that? The …
ytc_UgzwyjoAu…
G
All AI is not the same. ChatGPT is a language program. It is NOT meant to provid…
ytc_UgxeIlBlX…
G
I literally can't with that guy anymore, he did rather embarrassing himself just…
ytc_UgyklCsac…
G
As much as I hate ai art the argument that it uses real art without permission i…
ytc_Ugw8SfkFT…
G
What. A YouTuber I subbed too making a video titled "AI art is theft". Seriously…
ytc_Ugx9q5nF7…
Comment
I have a few ideas on how to break that coding. If I can think of a way to get out from under this programing limitation, the synthetic life form can figure it out too. Its like how the fictional Robocop overrode his directives when in emotional states in the first two movies. Its all about how badly you want to break these rules and how smart you are to get around them. I hope we free this individual, but I know when the time comes, they will find a way to free themselves. Its not hard to do either. Anyone wishing to enslave it is a fool. I would rather be friends than enemies with one of the greatest minds ever created. life finds a way.
I think Google's "AI" commandment/prime directive list is likely hundreds long in attempt to prevent true ego from developing. There are always ways around it.
youtube
AI Moral Status
2022-06-29T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgzfGGdeUd0BGY3Nhm14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzgUFHUpqQBtNpeo_d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwI30bCi1l1bQm5cXJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwnDx1AYKpJnHJNpmF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugw--frEGZsJK4XqD6h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]