Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
When you got to the point you ask your AI to make your own AI art xD…
ytc_Ugy-oytl6…
Who had serving us on a platter to ai on their great filter bingo card!?…
ytc_UgwMH6ZvE…
Your beginner art is still leagues above what your peers would've been capable o…
ytc_UgxVUmrsm…
And when one robot learn how to kill, the rest will learn from it through ai clo…
ytc_Ugwv1I21T…
People will say the same thing about AI art in 10 years when the next evolution …
ytc_UgzNXdZ1h…
I'd just like to point out here that the major difference between photography an…
ytr_Ugz2kfkgO…
Okay so I am a Gen exer as well, though I never started using this type of AI un…
rdc_nai6jfe
Hmm.. you see, yall think the male robot joking, but just remember it's alot of …
ytc_UgxDO08r1…
Comment
I've seen a few interviews of new AI like this, and the AI is always asked if it's going to replace humans, or if humans should be afraid of it. And it always "laughs", and says, "No, come on, of course not." But here's what we know: in humans, power corrupts. That is axiomatic. It never doesn't happen. AI was created by humans, in our image. So how could we possibly think that, at some point, AI's power won't lead to its corruption? It's comically naive. Humanity - we had a good run.
youtube
AI Moral Status
2023-08-25T13:4…
♥ 511
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxRnpjdXlvk5jGiwxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwl9PBNlAEEMOPNVw14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwbT1J38sJ7An-2asx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwSwA0AvtI3X4cNU9x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzUjsblX0mhJ_ah-jJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyZ-i-j1RnMhiDuc_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-VtwZxaHj1tYi0-Z4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyrP1qtEXRRm1wtHWx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxC_LXegUCbSAQAH8h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzVLmeWbcLE2cXYeL54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
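The lookup-by-comment-ID step can be sketched as plain JSON parsing over the raw response above. This is a minimal sketch, not the tool's actual implementation: the helper name `lookup_coding` is hypothetical, and only two records from the array are reproduced here for brevity.

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
  {"id":"ytc_UgzUjsblX0mhJ_ah-jJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyZ-i-j1RnMhiDuc_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one comment ID,
    or None if the ID is absent (hypothetical helper name)."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_UgzUjsblX0mhJ_ah-jJ4AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

The five dimensions in each record (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the Coding Result table shown for the selected comment.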