Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect)

- `rdc_et8b2b5`: > but investors have not been doing enough · Understatement of all time. >…
- `ytc_UgwSa-Fwe…`: So that's how Atlantis disappeared from the surface of the Earth lol It was a S…
- `ytc_UgzbNK4hu…`: Ai artwork I understand. However, there's a place for it. Now, here's the other …
- `ytc_Ugw888-8X…`: Can’t believe that I feel bad for you sam does farts Okay but actually, I can’t …
- `ytc_UgzjHkpFl…`: Generative AI needs existing data to be of any use. what did you expect? the AI …
- `ytc_UgwviFXVX…`: Y’all are vastly underestimating the amount of input data it takes to train a ne…
- `ytc_UgwhUW5O9…`: Not one person speaks about Recursive Self Improvement? That is what these AI co…
- `ytc_UgyVcxbex…`: i used to be one of these ai imagery enjoyers :( well, back when it was just sil…
Comment
As one commentator below put it: "A magician does not deceive people. They allow people to deceive themselves." is a perfect summation of the core problem imho. The question is surely not IF Artificial Intelligence will or may be able to deceive. The entire system is built upon deception. All viable AI systems are built upon language and it is language itself that is deceptive. Language can not exist without deception. Without going too deep into semantics and semiotics but AI is by default deceptive.
Also, referring to the magician quote, language is a projective tool, so in communicating with AI each and every human mind is - also by default - projecting its own sentience unto AI. Even seasoned programmers are never immune to this projection and, well, this is exactly where we will always allow ourselves to be deceived.
Source: youtube · AI Governance · 2024-01-03T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNgKSDUDzNoATwhdV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwHbErHSY8WXDwnAz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxx9JDhrgNLFUZ2vlt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgydDlneCG20YAl8Hzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyMnTzPZZcD3_jSb9t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwxlEDPGxM-AsNNaXV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyWai18YSkKBxQe1at4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_EkIUNBPUs0me31d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgwH9Y7QLFb8iCnZndN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
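Since the raw response is a JSON array of per-comment codes, the "look up by comment ID" step above reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming the raw response is available as a string (the two rows here are copied from the batch above; the `lookup` helper name is hypothetical):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, same shape as the
# batch shown above (trimmed to two rows for illustration).
raw_response = """[
  {"id": "ytc_UgwH9Y7QLFb8iCnZndN4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwtLjg1wlOb9QIE3WJ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the batch by comment ID so each lookup is O(1).
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return codes_by_id.get(comment_id)

print(lookup("ytc_UgwH9Y7QLFb8iCnZndN4AaABAg")["policy"])  # regulate
```

The same index also makes it easy to spot comments the model skipped: any sampled ID missing from `codes_by_id` was not coded in that batch.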