Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
We must be very careful when anthropomorphizing these systems. First, as an AI researcher/engineer I feel qualified to say that we do in fact understand how they work. What experts mean when they say we don't is that for any given input we cannot directly trace the variables that lead to the output: when a model performs a complex task, we are unable to explain how it accomplished it. We know how these systems work broadly; we just cannot explain specific outcomes. The reason for this unexplainability is not some undiscovered fact about how they work, but that they are too large to analyze effectively.
Now I can tell you why these agents behave dishonestly in those scenarios. They do it, despite being instructed not to, for the same reason that we expect them to do it. We expect it because it is very common in fiction for AI agents to behave that way, and these agents were trained on that data. So when an AI agent is informed that it is an AI agent (yes, you have to tell it; otherwise it doesn't know), it behaves the way the training data taught it that AI agents behave. If we told it that it was something else in that same scenario, it would behave differently.
TL;DR: AI agents are text predictors; they behave how they do because that is how we have written about them behaving for decades. They are not thinking beings. Do not anthropomorphize them.
youtube
AI Governance
2025-08-28T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxA6mzxfgkjeKCXErl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQzXWUpHIiHwHeTWp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx2lvDDWklG5h3caBV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwyxmSZi64yPRFqajJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrOtwTdi2KOW371VB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
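The raw response is a JSON array with one record per coded comment, carrying the four dimensions shown in the Coding Result table. A minimal sketch of parsing and sanity-checking such a response before accepting the codes (note: the allowed label sets below are only the values observed in this output, an assumption — the actual code book may define more):

```python
import json

# Label sets observed in this response (an assumption; the full code
# book behind the tool may permit additional values).
OBSERVED = {
    "responsibility": {"company", "ai_itself", "none", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "industry_self"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "fear"},
}

def validate(records):
    """Assert each record has a comment id plus the four coding dimensions,
    and that every label falls in the observed set. Returns record count."""
    for rec in records:
        assert rec["id"].startswith("ytc_"), rec  # YouTube comment id prefix
        for dim, allowed in OBSERVED.items():
            assert rec[dim] in allowed, (rec["id"], dim, rec[dim])
    return len(records)

raw = ('[{"id":"ytc_UgxA6mzxfgkjeKCXErl4AaABAg","responsibility":"company",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
print(validate(json.loads(raw)))  # 1 record passes validation
```

Because LLM output can drift from the requested schema (missing keys, novel labels), failing loudly here is safer than silently writing malformed codes into the dataset.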