Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "@machematix when I can smoke real budz with an AI I’ll agree we’ve made artific…" (ytr_Ugye8OjxR…)
- "One of the issues I've seen is that it's not just replicating styles. It's takin…" (ytc_UgwgxeWhY…)
- "Ok you can still learn things in a world with ai. You can probably learn more no…" (ytc_UgwW9YLky…)
- "It is 100% theft and people that call themselves AI "artists" are lazy and lack …" (ytc_Ugxo12P_H…)
- "When I learn of things like these I'm so glad to be living in quasi rural Mexico…" (ytc_UgxRQ0o8t…)
- "I think this whole AI thing is a bunch of hype, created by the AI creators thems…" (ytc_Ugz1XJCNR…)
- "You know it, I know it, so Anthropic/OpenAI knows it too - you just have to look…" (rdc_ne3bv30)
- "He was shot because a bunch of police turned up and were hanging around him, peo…" (ytr_Ugx3GtrIV…)
Comment
Hello Drew, This "Intelligence Curse" scenario is terrifying because it is mathematically rational for the board. As long as "Efficiency" is the only metric, humans lose every time.
But there is a missing variable in this equation that we need to weaponize: Brand Toxicity.
If a company replaces its workforce with AI, it should not just be a "PR issue"; it should be a Structural Liability.
We need to introduce the concept of "No Fault Redundancy" Protection:
The AI Tax: If a role is automated, the company pays a specific tax that funds the UBI/Retraining for the displaced worker. This removes the "pure profit" incentive of firing them.
The Brand Shield: We need to aggressively support "Human-First" certified companies. If a bank or tech firm purges its staff for bots, the public needs to treat their brand as toxic.
The CEO in your story caved because the cost of keeping humans was higher than the cost of firing them. We need to flip that math. Firing a human for an algorithm should be the most expensive decision a board EVER makes.
Platform: youtube | Video: Viral AI Reaction | 2025-12-04T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxnVl7qsQHr8Npr31F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwN3B6O8W6SbRTEy2R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyk806Ac8VgInhfsg54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyb5kzUq0c4zirJQel4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzklGenAO1pfH021ZN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-0T8cU6D3Ot4snJF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxU3hkBff4OqEQQfoZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx9_kA1dPyFbhFNW-B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxqgySLYyKfcYvRD0B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyce_mB-E56xg7xIQZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
```
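A response like this has to be parsed and checked before its labels are stored as coding results. The sketch below is a minimal illustration of how that could be done, not the pipeline's actual code: the function name `parse_coding_response` is hypothetical, and the allowed values per dimension are inferred only from the sample output on this page (the full codebook may define more categories).

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Illustrative only; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate every coded comment record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"missing comment id in {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# Usage: parse one record from the response shown above.
raw = ('[{"id":"ytc_Ugx9_kA1dPyFbhFNW-B4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["policy"])  # liability
```

Rejecting malformed records here, rather than after insertion, keeps a single bad model output from silently corrupting the coding table.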