Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgiaX08nb…`: "so if this robot can get aware, this mean that this robot can oppose to your ord…"
- `ytc_UgxC3MegP…`: "What do you call someone very intelligent, with no conscience, and limited to no…"
- `ytc_Ugy2B014T…`: "this is how false premises are not identifed by AI. God is NOT GOODNES, exept by…"
- `ytc_UgzUXE2d9…`: "For humanity sake… I cannot believe the arguments that the pro AI group makes… A…"
- `ytc_Ugyws5GHV…`: "Could be a period of uncertainty that will accompany the development of artifici…"
- `ytr_UgyO2So82…`: "Claude Code is good, especially with the Opus model, but it's so expensive and t…"
- `ytc_UgyHnlcf-…`: "Mr Sanders. It has occurred to me the easiest job for Ai to replace would be th…"
- `rdc_mnpc2s6`: "The term AI is disingenious. It's literally just mathematical Machine Learning. …"
Comment
I really like Hank, but this episode wasn't it. A whole lot of nonsense was taken at face value even though I MOSTLY agree with the overall premise that actual super human AI would be very bad for the species, LLMs ain't it, but it sure does sell books (and get investors investing) to say the LLM is plotting to prevent it's own shutdown rather than the obvious of the LLM regurgitating one of the millions of AI stories it ingested about trying to prevent it's shutdown.
LLMs are stochastic parrots, not even "reasoning" has stood the test of time and has been pretty conclusively proven through white papers to be bullshit.
youtube · AI Moral Status · 2026-01-16T15:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxoBKYHDOlWXw2ukhl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8PsmMoMXCDsLLgpN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnM7NPE-qPsMeK-fZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzQehEonz8RsHB83Fp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyeki81sRIbPn6AJZh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYNq3S3jmPlxF6O0V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4-BKGrUI2xFRkQj14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzWvNBxMQ_SS3fX_-94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlT3GJxYFK42tj9fF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlmMjeDWzJMao02OZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
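The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch could be parsed, validated against the four coding dimensions, and indexed for the "look up by comment ID" view. The field names come from the response format shown above; the sample IDs (`ytc_AAA`, `ytc_BBB`) and the `index_codes` helper are hypothetical, for illustration only.

```python
import json

# Hypothetical sample mirroring the raw LLM response format shown above.
raw = """[
 {"id": "ytc_AAA", "responsibility": "none", "reasoning": "consequentialist",
  "policy": "none", "emotion": "indifference"},
 {"id": "ytc_BBB", "responsibility": "developer", "reasoning": "deontological",
  "policy": "regulate", "emotion": "approval"}
]"""

# Every record must carry the comment ID plus the four coding dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse a batch of coded comments and index the records by comment ID."""
    records = json.loads(raw_json)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            # A record missing a dimension would surface as "unclear" downstream.
            raise ValueError(f"record {rec.get('id', '?')} missing: {missing}")
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw)
print(codes["ytc_BBB"]["policy"])  # regulate
```

If a comment's ID never appears in the parsed response (or the JSON fails to parse, e.g. because of a stray trailing `)` like the one in the raw output above), the coding table would show `unclear` for every dimension, which is one plausible reading of the result shown for this comment.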