Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Assholes get me on here. 3:31 you dumb ass. “What to learn” what job, how about …" (ytc_UgxhZCd0r…)
- "saying you created ai art is like saying you created a piece of art you commissi…" (ytc_UgyQNCik0…)
- "no it's not. just be more creative. AI is replacing EVERYTHING. it has rendered …" (ytc_UgxyMYOPM…)
- "Definitely an interesting video about how different sections of creatives view a…" (ytc_UgwfmPDeB…)
- "I remember seeing a retweet saying that doing all this isn’t gonna help other th…" (ytc_UgzMQxdU2…)
- "I shared a link to the video with Grok and asked if it’s likely AI: it is 💀…" (ytr_Ugy_T-08d…)
- "Finally, the second once in a lifetime crash in 12 years. The battle will be leg…" (rdc_gkpotez)
- "The AInosnt going to take over the world / They are just machines with no soul / T…" (ytc_UgzwgpbzA…)
Comment
In order to be "intelligent," AI has to be programmed to be flexible. The more sophisticated an AI is, the more exceptions and prioritizing need to be programmed into it. If one of those exceptions is human life or prioritizing one human life over another, you can end up with a real mess. And because the only way to communicate with AI is digitally, it would have no way of distinguishing Twitter threats and opinions from "the real world." And, despite being named for wishful thinking, "artificial intelligence" is essential just a highly sophisticated algorithm. It has no real cognizance or emotion (and, therefore, no sympathy or empathy) and only does what it is programmed to do. Is that really what we want to rely on for generating best strategies and replacing conversations with real friends? No, thanks.
youtube | AI Governance | 2023-04-19T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy3o47Z7IgsjZ8ys4l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyFFHzY-dkfqZvYP0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyj__6eiX0XhTiXM014AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwiWnjHCuY9K9eKOQ14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy0E0XiIMhn9xnkV8F4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxXUcuEkAd2dPok9Jp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2zMwR5kLxiOCIVeV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzbr8LO42P-z_8w7bN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy2APCGVzXZx-9N3BN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxjyCeZa5pCBrlKvFh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"approval"}
]
```
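The raw response above is a JSON array with one object per comment, carrying the four coded dimensions plus the comment ID. As a minimal sketch of how such a batch can be parsed and looked up by ID (the `index_codings` helper and the validation rule of dropping entries missing any dimension are illustrative assumptions, not the tool's actual implementation):

```python
import json

# Truncated example payload, reusing two entries from the raw response above.
raw = '''
[
  {"id": "ytc_Ugy2zMwR5kLxiOCIVeV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyFFHzY-dkfqZvYP0F4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
'''

# The four coding dimensions shown in the result table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment ID,
    skipping malformed entries that lack an ID or any dimension."""
    out = {}
    for row in json.loads(raw_json):
        if "id" in row and DIMENSIONS <= row.keys():
            out[row["id"]] = {k: row[k] for k in DIMENSIONS}
    return out

codings = index_codings(raw)
print(codings["ytc_Ugy2zMwR5kLxiOCIVeV4AaABAg"]["emotion"])  # fear
```

Keying on the comment ID is what makes the "look up by comment ID" view possible: each coded row can be joined back to the original comment text and its platform metadata.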