Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing is, the way current machine learning works, it's largely just snarfing up a gigantic corpus of (hopefully) human writing and making associations based on lots and lots of memorization. Expressing fears about losing work to AI? That's the collective consciousness of people on the Internet. Those are probably built from Reddit comments. Sometimes it works, and sometimes I ask Gemini to make a Star Trek quiz for me and it claims that Mirror Universe Captain Kirk's middle name is Terrance. I THINK it made a false association between Terrance and the Terran Empire. (It's Tiberius, by the way, just like Prime Kirk. It'd be fun to say it's Reginald after the James R. Kirk tombstone.) I think the creepy thing about LLMs, when they get things right, that is, is how, despite being neural networks, they aren't really all that much like our own brains, and yet can often fool people into thinking they are. The problem here is that LLMs don't seem to be that smart, because they're purpose-built for memorization. Not sure why companies are trying to replace search with AI, when Grok of all AIs seems to suggest that hooking an AI up to a search engine and having it aggregate results on the fly is a better solution. I don't know how much research has been put into trying to implement emotion in LLMs, either. Current LLMs seem to lack the ability to say "I don't know" when they should have low confidence in results.
youtube · AI Moral Status · 2025-06-19T15:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
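The coded record follows a small fixed schema. A minimal sketch of that record as a Python dataclass, using only the dimension values that actually appear in the raw response below (the full codebook may allow more values, and the name CodingResult is hypothetical):

    from dataclasses import dataclass

    # Value sets observed in the raw response below; the real codebook may be larger.
    RESPONSIBILITY = {"none", "ai_itself", "developer", "user"}
    REASONING = {"unclear", "consequentialist", "deontological", "virtue"}
    POLICY = {"none", "regulate"}
    EMOTION = {"indifference", "fear", "outrage", "approval", "mixed"}

    @dataclass
    class CodingResult:
        comment_id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            """Raise ValueError if any dimension is outside the observed value sets."""
            checks = [
                ("responsibility", self.responsibility, RESPONSIBILITY),
                ("reasoning", self.reasoning, REASONING),
                ("policy", self.policy, POLICY),
                ("emotion", self.emotion, EMOTION),
            ]
            for name, value, allowed in checks:
                if value not in allowed:
                    raise ValueError(f"unexpected {name} value: {value!r}")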
Raw LLM Response
[ {"id":"ytc_UgzQRsgKyP3X3Wf_Fe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwrsBfZCkJREZpgdIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzO3OG1RhVsDD-pN7N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxqM29CpqwmO7G867N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwCOh0vYtx3npl7XJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx1iTQXjrBa_ahqgvt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyQX89Iq0cWsdfZ32J4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzH7Fnks1HlGgq7vQV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx6kGJhr1Dgzn5Vk5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz65DbIT5JjevlnKzF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"} ]