Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Being a person of color automatically makes you a suspect. Technically they shou… (ytc_UgzPxfbIy…)
- Bible prophesies must be fulfilled. Life gets tougher, not easier. Till the retu… (ytc_Ugw0rczTA…)
- you should consider how the music industry has tried to negotiate with the likes… (ytc_Ugyp8mOam…)
- I’m curious if AI is capable of insight and if it can reason without being provo… (ytc_UgyKeDntn…)
- This is dajjal system. Antichrist system at its peak. Technology harming the pub… (ytc_UgyAeYL-t…)
- They were so stupid about it. Use ChatGPT to help but you still gotta fact check… (ytc_UgymYFJUB…)
- He's a fan of sci-fi but doesn't mention the book, I, Robot by Isaac Asimov, whe… (ytc_Ugys7Q9Na…)
- At this point it's very unlikely any sort of AI will destroy us by doing a Skyne… (rdc_kqt8qxg)
Comment
I'm still not buying it. I mean, I can tell how AI works, especially with art and video. It's just kind of jigsawing together shit it was fed on. I can't imagine the language model is a huge jump from that. There's truly not a whole lot of ways to put words together in any one language before it just turns into nonsense. Probability is the thing being trained. All of human published history is the example. I always think of the Chinese room thought experiment, which was originally used to show how silly and unremarkable the Turing Test was. Creating sentences that sound like a person could have written them, regardless of if it's true, false, nonsense, and can't be substantiated or elaborated on, just isn't the breakthrough these folks want investors to think it is. It's the 21 questions game. It's mad libs with numbers instead of letters. It's silly to assume something we would recognize as intelligence would ever rise from this meta-tag guessing game. We still don't understand how we evolved our level of consciousness.
youtube
AI Moral Status
2025-10-31T09:4…
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxJe79ZRUS_9eOtP1J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"tragic"},
{"id":"ytc_UgwIwA9d-TJy_ELMYyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOoxXxs2Faj-YeX7t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyUQ0LirlntAuUax754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyP8A0kx6ACM3bg6154AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxHfjosJT57L4hmvkR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzubdM5fyd2xONljAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTErbDjoVi_FI1WVd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx59i9jiCN5KkOb2ll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzp8ZqUv3SNaAnW38d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
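The raw response above is a JSON array of per-comment codings, one object per batched comment, keyed by comment ID with one field per dimension. A minimal sketch of parsing and validating such a response follows; the allowed value sets are inferred from this one sample and from the table above, so the real codebook almost certainly contains values not listed here.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "tragic", "mixed", "fear", "outrage", "approval"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response into {comment_id: coding}, rejecting unknown values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        coding = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = coding
    return out

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]'
codings = parse_codings(raw)
print(codings["ytc_x"]["emotion"])  # → indifference
```

Indexing by ID rather than position makes the result robust to the model reordering or dropping items within a batch, which is why each object carries its own `id`.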