Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I do not believe we need to fear a Terminator scenario. More likely we need to b…
ytc_Ugx7PxZMc…
Because they're in bed with them and want to use AI to monitor us with no oversi…
ytc_UgyUW1gIe…
There is no such thing as best AI, is all manmade , as long as the AI goes a lit…
ytc_UgyayOa4C…
Ok, Please let me know if everybody start producing everything and become capabl…
ytc_UgyyhNFOn…
They wanna look busy , alive and interesting with AI accounts but to me that giv…
rdc_m5mkw03
This is very cleaver however very dangerous this needs to stop now before it get…
ytc_Ugxb2wY0V…
The biggest problem with AI finance. the software or website model services make…
ytc_UgzTzuUGW…
The same way you beat that card, same way that robot does your skull...guess nob…
ytc_UgzdzFzwR…
Comment
Could we get an episode on the negative effect of AI on thinking clearly if used in a way where all changes are accepted automatically without critical thinking? Also would be interesting to hear your opinions related to the "state of art" way of using AI agents for programming according to anthropic employees at least but critiqued by many including me. Which suggests that you should be working on multiple things at once, context switching between multiple agent sessions which is as we know from previous research very bad for attention.
youtube
2026-03-06T09:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxZru76v_uBdKULcw14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx745Pos2bYi4qJfvt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwkyEeBU8ionUV2eUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGNW-jCWTplJXn_qN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyOUEYv5w1PftWeJgd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxgOs0igbq2aropUhR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyR2expQ9MmGE1355l4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHhSzEfQ4GOqcuwHZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw5d1PAcLSYAo92jEx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxnFBWix4zUtIO2Si54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
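The raw response above is a plain JSON array, one object per coded comment, so looking up a coding by comment ID reduces to parsing the array and indexing on the `id` field. A minimal sketch of that lookup, using the field names shown in the raw response (the function name and the truncated two-row sample payload here are illustrative, not the tool's actual API):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array where each
# object carries an "id" plus the four coding dimensions.
RAW_RESPONSE = """[
  {"id":"ytc_UgyOUEYv5w1PftWeJgd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxnFBWix4zUtIO2Si54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""


def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and return {comment_id: coding dict}.

    Each value maps dimension names (responsibility, reasoning,
    policy, emotion) to the coded label, with the id stripped out.
    """
    rows = json.loads(raw)
    return {
        row["id"]: {k: v for k, v in row.items() if k != "id"}
        for row in rows
    }


codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_UgyOUEYv5w1PftWeJgd4AaABAg"]["emotion"])  # → outrage
```

Because the model returns a flat array rather than an object keyed by ID, building this index once per response makes repeated per-comment lookups O(1) instead of a linear scan.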