Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@DigitalEngine I never used AI before but I finally broke my silence to ask Grok…
ytc_Ugw64J_ls…
I'd be screwed I say scary things that I've never done to them because I find sc…
ytc_Ugy48ehtX…
Too many drivers have lost respect for, and fear of, what it means to be behind …
ytc_Ugxre77aa…
I'm not a tech bro and while I understand the argument about AI being "theft" th…
ytc_UgxdzyyUj…
This blue blood bull seed... Drawing is a skill all people can learn. Some need …
ytc_UgxgHrFRJ…
ChatGPT: No, I don't intentionally lie. My goal is to provide accurate and helpf…
ytc_UgwPdiHmt…
"ai is accesible unlike real art" bro when i was a little goblin first learning …
ytc_UgxlnxJc5…
AI art wouldn’t be here without humankind, the effort and dedication put into ev…
ytr_UgwfPYTBK…
Comment
The thing is, the way current machine learning works, it's largely just snarfing up a gigantic corpus of (hopefully) human writing and making associations based on lots and lots of memorization. Expressing fears about losing work to AI? That's the collective consciousness of people on the Internet. Those are probably built from Reddit comments. Sometimes it works, and sometimes I ask Gemini to make a Star Trek quiz for me and it claims that Mirror Universe Captain Kirk's middle name is Terrance. I THINK it made a false association between Terrance and the Terran Empire. (It's Tiberius, by the way, just like Prime Kirk. It'd be fun to say it's Reginald after the James R. Kirk tombstone.)
I think the creepy thing about LLMs, when they get things right that is, is how despite being neural networks, aren't really all that much like our own brains, and yet can often fool people into thinking it is. The problem here is that LLMs don't seem to be that smart, because they're purpose-built for memorization. Not sure why companies are trying to replace search with AI, when Grok of all AIs seems to suggest that hooking an AI up to a search engine and having it aggregate results on the fly, is a better solution.
I don't know how much research has been put into trying to implement emotion on LLMs, either. Current LLMs seem to lack the ability to say "I don't know" when they should have low confidence in results.
youtube
AI Moral Status
2025-06-19T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzQRsgKyP3X3Wf_Fe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwrsBfZCkJREZpgdIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzO3OG1RhVsDD-pN7N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxqM29CpqwmO7G867N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCOh0vYtx3npl7XJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx1iTQXjrBa_ahqgvt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQX89Iq0cWsdfZ32J4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzH7Fnks1HlGgq7vQV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6kGJhr1Dgzn5Vk5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz65DbIT5JjevlnKzF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
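A raw response of this shape (a JSON array of per-comment codes) can be validated and indexed by comment ID with a minimal sketch like the one below. The allowed label sets are inferred only from the values visible above and may be incomplete; `parse_codes` and the example ID are hypothetical names, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the examples above
# (assumed label sets; the actual codebook may define more values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment codes)
    into a dict keyed by comment ID, dropping malformed entries."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id", "")
        # Keep the entry only if every dimension carries a known label.
        ok = all(entry.get(dim) in vals for dim, vals in ALLOWED.items())
        if cid and ok:
            coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# Usage with a single hypothetical entry:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # fear
```

Validating against the known label sets, rather than trusting the model output, matters here: LLM coders occasionally emit labels outside the codebook, and silently storing those would corrupt downstream tallies.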