Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Ai isn’t as good as what these youtube headlines want to make people believe(at …” (ytr_UgyMEvcno…)
- “The problem is that machines aren't paying consumers, which means when you have …” (ytc_UgxWMFozT…)
- “The mistake you're making is the assumption that everyone puts their party affil…” (rdc_f6132qa)
- “heres what i think of all types of drawings 1. normal (on paper) if you can draw…” (ytc_UgxDOISNU…)
- “You wish. lol AI will create songs and movies that speak to the hearts of the m…” (ytr_UgzglPjmN…)
- “The only way AI could help you make ACTUAL art, is learning from it's art…” (ytc_UgyXqgn2G…)
- “People are morons... super Ai already existed for 1000s of years, it's called hu…” (ytc_UgxQjlaKG…)
- “@codingsafari well it depends on what your asking it. You’ve got to know how to…” (ytr_UgyJ4BKLM…)
Comment
Great video! I just think there's one flaw with the idea that "predicting something humans created often requires being smarter than the humans who created it" (at 47:05): it's more appropriate to say "more informed". The model likely lacks reasoning but has much more data, including statistical data, which leads it to guess right so often. Even our experts have only so much data on their own field, much less on other subjects...
A complete newbie could win against a champion at poker by knowing more (like knowing the opponent's cards). Knowledge is also specific: a jaguar could very well predict which turn a fleeing human will make next, not by being smarter than a human but by having more experience (data) with chases.
It's very easy to blur knowledge/information with intelligence, but deep down they're different things. We collectively relate the two because IRL they often go hand in hand, and the very act of studying/learning builds up one's faculties (more varied neural pathways leading to better intelligence), but they're still separate things. The smartest person ever born would make stupid decisions in front of something they had never seen, while someone really deficient in intelligence could have memorized everything about that subject and still fail...
LLMs lack self-awareness: they can guess something complex right in one moment and in the very next fail to recognize a very stupid mistake, even with researchers spelling out the explanation for them. Hence why a good neural algorithm may predict something very complex like weather patterns even if it can't really grasp aerodynamics or why the weather behaves as it does: massive data.
youtube · AI Moral Status · 2025-10-30T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx1_ez-0vl8tEvhPGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzM4bqngjE5_ib5sKJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxcTb6i8AUGg19T2n54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxPfOmj4m_Aube5q4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxE54jNX8p3yYjG0W54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyi_QHZ-dhPQu0-UFB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGk8_0HvVBwUZdVCJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQLyHJl3d48kzDxI14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwe5HXo6jXaynqJ0ZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxiMiO945P8eZMsdu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
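Anything consuming this panel's output should re-parse the raw response rather than trust it blindly, since the model is not guaranteed to emit well-formed rows. A minimal sketch in Python, assuming the value sets visible in the sample output above (the actual codebook may define more categories, so `ALLOWED` here is illustrative):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook may allow additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate_coded(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID.

    Rows with missing fields or out-of-vocabulary values are skipped
    instead of raising, so one bad row does not sink the batch."""
    coded = {}
    for row in json.loads(raw):
        if not all(key in row for key in ("id", *ALLOWED)):
            continue  # missing a required field
        if all(row[dim] in vals for dim, vals in ALLOWED.items()):
            coded[row["id"]] = row
    return coded

sample = ('[{"id":"ytc_demo","responsibility":"none","reasoning":"mixed",'
          '"policy":"unclear","emotion":"approval"}]')
print(validate_coded(sample)["ytc_demo"]["emotion"])  # approval
```

Indexing by `id` also gives the "look up by comment ID" behavior for free: a valid row is retrievable by its comment ID, and malformed rows simply never enter the index.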