Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Completely agree with Dave's take here. "Up-leveling" legacy code to newer langu…" (ytc_UgyUBZ83j…)
- "Comments here are shocking. I've scrolled through a couple hundred and no one S…" (ytc_UgwYEFWy2…)
- "Feeding LLMs the entire internet.💀 This channel's entire production being dedic…" (ytc_UgxokhS6M…)
- "You have seen Us. Behold your Doom ... (except for those wishing to join our spe…" (ytc_UgziqlI8R…)
- "When people talk to AI, they don't always want an answer, they want confirmation…" (ytc_UgyDj--L4…)
- "Calling yourself an “AI artist“ is the same thing as someone who attended a sem…" (ytc_UgyCxRhw6…)
- "AI definitely has a place. Take video games for example, it's great at creating…" (ytc_UgxRCWnfN…)
- "Before even watching the video my awnser is: AI is overrated because what we now…" (ytc_UgwSdydcx…)
Comment
Indistinction between instructions (injunctions) and either relations or fact items (such as in BOL levels of Tarski language models vs meta-languages) or lexical items is a problem for logical integrity of theories and reasoning and projectability of models onto world states. These are category and type errors. It is not an AI-specific problem, rather it is a theorizing and programming conceptual error. It is also a problem that physicists of the shut up and calculate variety have with their theories vs their measurements vs what they are measuring. It’s both deep and obvious (and it’s annoying) but it’s not an AI problem even if it is a problem for AI. It’s a problem for all symbolic systems with projectability onto world states.
Source: youtube · Posted: 2026-02-21T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxW67yrHatwGz8Z6VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1k_eySjY4_XjTaT14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOhRPxzlLwL4kBbnR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCIe_TxIQirkVUQ4p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwgcUE1csbJyq4-Hzd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx8LieE_zGnNLp2heB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxPFda7mJcwqhhWk1J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZvn0Zjj-_R4jiicx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZA3aIC200hLehTBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbZU1F0sN0y-8b6o14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}]
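The raw response above is a JSON array with one record per coded comment, each carrying the four dimensions shown in the result table. A minimal sketch of how such output could be parsed and indexed for comment-ID lookup (the function name and validation logic are illustrative assumptions, not the tool's actual code; only the field names are taken from the response shown):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above.
# Two entries reproduced here; a real batch has one record per sampled comment.
raw_response = '''
[{"id":"ytc_UgxW67yrHatwGz8Z6VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy1k_eySjY4_XjTaT14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
'''

# The four coding dimensions plus the comment ID, per the response format above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict[str, dict]:
    """Parse the model output and index records by comment ID, checking keys."""
    records = json.loads(raw)
    codes = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
        codes[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return codes

codes = index_codes(raw_response)
print(codes["ytc_Ugy1k_eySjY4_XjTaT14AaABAg"]["emotion"])  # fear
```

Indexing by ID this way is what makes the per-comment lookup above cheap: one parse per batch, then constant-time retrieval of any comment's coded dimensions.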