Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Now I know the beauty of your channel and this video prove it and it remarks I'v…" (ytc_UgxeA6gd9…)
- "Me. I’ve been training to be a Graphic Designer for a few years, with the intent…" (ytr_UgwyiPC9X…)
- "That's dumb as fuck. ChatGPT isn't a person and it doesn't learn from users just…" (ytc_UgwjAkGUS…)
- "AI learns - see \"machine learning\" - from datasets, and applies that learning to…" (ytc_UgzXJ5mWQ…)
- "I think the name was chosen so that the full moniker would be “anduril industrie…" (ytr_UgxWbfl5t…)
- "You mean, it is going to be worse than the self-generating YouTube AI slop?! Ple…" (ytc_UgwsmrfR8…)
- "Honestly calling them artists is the nicest thing I think anyone could ever say …" (ytc_UgzTTHRdD…)
- "Here from 2 years after deepseek solved all those problems and sora 2 is out…" (ytc_UgzckLmWx…)
Comment
For every conscious moment of human awareness there is a feeling. There is a contextual sense, a sensibility in reaction to the moment. We identify and name these sensable moments as emotions. There are 600 distinctly identifiable emotions and these will provide a reasonable and foundational pretext for building/teaching each successive LLM (Large Language Model) integration/learning approximating human intelligence. With emotions embedded, as in a digital schema, next we add a full variety of contextual sentient human experiences for each of the emotions with notable preferences included e.g especially acceptable and appropriate preferences like good, better, best and ideal or worst and avoided all costs. Best and worst case scenarios with emotions hardwired into the human intelligence approximation system.
Not only do I believe this is the place to begin but I do believe it is the essential and logical or reasonable starting point. Innate in each emotion state is emotion recognition, identification and physical expression...
youtube · AI Governance · 2025-10-17T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy7wbM4Zq_9jHV609d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyYqiMfkIH_xZGslIt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz9IxIirpBhfxLgmgp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzkqAQWZFl0wpZkIux4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyQINuhqkN5Rhnjcah4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzuiCrP89wKMdDD8ol4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz1JxtoK1ihz5ckmb94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKKAbBYahk3Ro-r5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwZf7SVXR8Zc6NFkdR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEyMY9jNhT8nybMl14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"}
]
```
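The lookup-by-ID flow above can be sketched in a few lines: parse the raw LLM response (a JSON array of coded records), index it by comment ID, and validate each dimension against the category values visible in this sample. The `ALLOWED` sets below are inferred from the output shown here, not from the full codebook, and `lookup` is a hypothetical helper name.

```python
import json

# Category values observed in this page's sample output; the real
# codebook may define additional values per dimension (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "approval", "resignation", "indifference", "fear", "mixed", "unclear"},
}

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coded record for one comment ID from a raw LLM
    response, raising if a dimension holds an unexpected value."""
    records = json.loads(raw_response)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id[comment_id]
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            raise ValueError(f"unexpected {dim!r} value: {rec.get(dim)!r}")
    return rec

# Minimal usage example with one record from the response above.
raw = '''[
  {"id": "ytc_Ugz1JxtoK1ihz5ckmb94AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]'''
print(lookup(raw, "ytc_Ugz1JxtoK1ihz5ckmb94AaABAg")["emotion"])  # prints "indifference"
```

Indexing by ID rather than scanning the list keeps repeated lookups cheap, and the validation step catches a model emitting a label outside the expected schema before it reaches the coding table.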