Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "WATCH CLOSELY WHEN YOUR WATCHING THE DIFFERENT SHORTS ABOUT THIS ROBOT * THAT LI…" (ytc_Ugwe_bG7q…)
- "i feel that for a robot to have consciousness they need to feel pain sadness and…" (ytc_Ugjs7Uuup…)
- "When you get down to it at the end you realize it is a Childish mental exercise.…" (ytc_UgwQZIwDN…)
- "This is part of the reason why I rarely post my art online. I don’t want my art …" (ytc_UgwtReb48…)
- "There is some good information here, but a lot of it (for now) is unquestioned d…" (ytc_Ugx7JG7yn…)
- "Is it just me or did ChatGPT misundertsant the question about the five people wh…" (ytc_Ugz7t2JIA…)
- "The difference between a human and a robot is that the robot is programmed to tr…" (ytc_Ugw9r0AZg…)
- "Ok let me ask you this. Imagine that a company raises a kid from birth to be an …" (ytc_UgxBTZ-ie…)
Comment
I agree with Roman that safety is crucial, and we do need to focus more on deploying AI in a safe and responsible manner. At the same time, some of his wilder predictions can’t really be substantiated. I’d love to see more critical perspectives on these claims, especially since GPT-5 has clearly shown that an LLM alone is not making the leap to AGI. It would be great to have guests like Ed Zitron or Gary Marcus on the show to challenge these narratives.
Source: youtube | Topic: AI Governance | Posted: 2025-09-04T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgymQejqPY0vbczVQth4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxS86-05T9bvH3peid4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwmlZbCzn2ft0Zrx2p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAqyEg4nVc8Xs-EAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz8y7g_dCnDcJFTvel4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyx6Q4hR4WAbLIT_zh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxw6sauUEBGK-vN9M54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwFyGo1Wo7ZNdIQKcx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz7Uu1cdJD8ppR_sv54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz0-s_OeHg-zFymm7Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
```
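
The raw response above is a JSON array of per-comment code records. A minimal sketch of how such a response might be parsed and indexed by comment ID for lookup — the `raw` string below abbreviates the sample to two entries, and the field names mirror the response; nothing here is tied to a specific tool:

```python
import json

# Abbreviated raw model output: a JSON array of per-comment codes
# (two entries copied from the sample response above).
raw = """
[
  {"id": "ytc_UgymQejqPY0vbczVQth4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxw6sauUEBGK-vN9M54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the model response and key each code record by comment ID."""
    records = json.loads(raw_json)
    indexed = {}
    for rec in records:
        # Skip malformed records missing the ID or any coding dimension.
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            indexed[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return indexed

codes = index_by_id(raw)
print(codes["ytc_Ugxw6sauUEBGK-vN9M54AaABAg"]["policy"])  # → regulate
```

Keying records by ID this way also makes it easy to detect comments the model dropped or duplicated, by comparing the indexed keys against the batch of IDs that was sent.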