Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The argument against AI being more accessible doesn't really hold. Yes, a smartp…" (ytc_UgzkxGeqF…)
- "And they shall be sheep, we will herd them to virtual shackles, tell them they a…" (ytc_UgwNE3o-_…)
- "Fun fact, Most of AI "artists" on Twitter are bots, this must be a bot because o…" (ytc_UgymaIfFS…)
- "I have attempted to do things at my job using current token limits and the inter…" (rdc_mzy4upq)
- "I talked to chat gpt. He said it is fake. Chat gpt and ai has many restrictions…" (ytc_UgxMNFFR3…)
- "the whole point of art is creativity. these ai generated images are not and neve…" (ytc_UgwWO2RSj…)
- "It has been proven that governments could be more dangerous than any private ac…" (ytc_UgzbCfyWh…)
- "i just use ai to see my ideas come to life, like I have a character named Angelo…" (ytc_UgyLwTX2j…)
Comment
Only comforting part is, if AI turns on humanity, techbros will be first to be killed by it.
There is no way for AI to develop, that ends up well for regular people - simply because people in charge will keep benefits of AI to themselves, and give everyone else only downsides, like jobs vanishing out of existence.
youtube · AI Jobs · 2025-12-23T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
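The four coded dimensions appear to draw on small controlled vocabularies. A minimal validation sketch in Python, assuming value sets inferred only from the codings visible on this page (the actual codebook may define additional labels):

```python
# Controlled vocabularies per coding dimension, inferred from the sample
# codings shown on this page -- the real codebook may be larger.
CODEBOOK = {
    "responsibility": {"company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "resignation", "indifference", "approval", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} value: {value!r}")
    return problems

# The coding from the table above passes validation.
example = {
    "responsibility": "company",
    "reasoning": "deontological",
    "policy": "unclear",
    "emotion": "outrage",
}
print(validate(example))  # []
```

A check like this catches the common failure mode of LLM coders drifting off-vocabulary (e.g. emitting "anger" instead of "outrage") before the coding is stored.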
Raw LLM Response
[
{"id":"ytc_UgwvkE8SkdYEemb8Nat4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx6kPiETRLq5MxHH4x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyT_G0nZ8f5NeA4bRp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzgepflg4zWkZZvlBh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyJzx3KPvPELBJyowB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwOTNfN0fOhvucTxk54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlcwF3yhAzgrriM1l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEXGK2stgqgN0k_8V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzAWE7ebrGjALtHLD54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLtvxA8zMA7gepUb14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
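The raw response above can be indexed by comment ID to support the "look up by comment ID" workflow described at the top of the page. A minimal sketch, assuming the model output is a JSON array of per-comment codings like the one shown (only the first two entries are reproduced here for brevity):

```python
import json

# Raw LLM response: a JSON array of per-comment codings. These are the
# first two entries from the response above; the full array has ten.
raw = '''
[
 {"id":"ytc_UgwvkE8SkdYEemb8Nat4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugx6kPiETRLq5MxHH4x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
'''

# Index codings by comment ID for constant-time lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgwvkE8SkdYEemb8Nat4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company outrage
```

In practice the model output should be parsed inside a `try`/`except json.JSONDecodeError` block, since a response that drifts from strict JSON is a real failure mode of LLM batch coding.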