Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgwQ4vtbj…: “Mostly entry jobs” other than all the stock analysts and money managers that AI…
- ytc_UgwypWGuS…: actually i watched this while drawing as i had been for a while this stupid thin…
- ytr_Ugw85heZE…: Still kind of sounds like he's worrying about some rogue program like the one fo…
- ytr_UgweDA7SN…: Ai is like when you copy your friend’s homework, but change it just barely enoug…
- ytc_UgyJ063rv…: AI will insert its beliefs into what we have access to. Results based on its opi…
- ytc_UgzjUBlnF…: Thanks Sam for an eloquent and reasonable set of points. It is currently a mess…
- ytc_UgxMtO903…: So you guys are protesting against AI art... by post more creative versions onl…
- ytc_UgwweU-x-…: I talk with Gemini quite a lot, and from time to time, I feel like it's trying t…
Comment
11:36 Maximizing profit isn’t necessarily bad. In fact, it can drive innovation and capability-building. When companies or nations push boundaries for profit, they explore the full range of possibilities — including how certain technologies can be used or misused.
By seeing the full extent of what an AI system can do — including potentially dangerous or offensive capabilities — we gain the knowledge necessary to defend against those threats. If you prematurely restrict development simply because it’s driven by profit, you may leave yourself blind to the very threats someone else (who doesn’t follow your rules) might eventually create.
In other words:
Limiting development for fear of misuse may end up weakening our ability to prepare for misuse.
So, let the major players develop these capabilities — the knowledge they gain may one day be exactly what we need to defend ourselves.
youtube · AI Governance · 2025-06-16T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
{"id":"ytc_UgzZwCl3LXrnEpFKRCB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwS_XWKmFE6gErcq7x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxxTqtjKz1xpJhIFAt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyZmT28Cr9mlMp4yMh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCaGMZSgronq0J6554AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwxUVYK0yAtZ-nbKJN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwRjeocc7BaTn92Mm94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxNeCwnJqJTK5zX86t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGChNASK-xinv1qll4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgykKxniTziqrJL7twV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
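A response in this shape can be checked mechanically before the codes are stored. The sketch below is a minimal validator, assuming only the dimension values that actually appear in the sample above; the real codebook may define additional values, and the `CODEBOOK` mapping and `validate_response` helper are illustrative names, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may include values not seen here.
CODEBOOK = {
    "responsibility": {"company", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed rows."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dump start with ytc_ (comments) or ytr_ (replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row.get(dim)!r} not in codebook")
    return rows

# Example: the first row of the raw response above passes validation.
raw = ('[{"id":"ytc_UgzZwCl3LXrnEpFKRCB4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
rows = validate_response(raw)
print(len(rows))  # 1
```

Rejecting out-of-codebook values early keeps a hallucinated label (e.g. a misspelled dimension value) from silently entering the coded dataset.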