Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Stop telling ai what it's bad at! Fuck, tell ai it's crushing it in the avenues …
ytc_UgwPbBpJe…
Yeah there's a big lifestyle website I use that has 'groups' for specific topics…
rdc_my523gr
I'm just waiting for the day when we all are automated out of work, and I'm tryi…
ytc_UgzZ4KZbj…
So what kind of risk is it if someone who knows how to build an AI that no longe…
ytc_UgzUwFD6y…
AI is just a database of what humans have gathered. It will not take over job ro…
ytc_Ugy9dxCCC…
1:03 😭😭😭Ai is funny but when the president does something like this thats just f…
ytc_UgwZ_9qNQ…
electric consumption tax, high profit tax will easy kill this ai company's in 2 …
ytc_Ugx7coCgP…
One concept I had was as most of these AI models run on hardware. If we coded a …
ytc_UgyeG_VWw…
Comment
There will need to be a new taxation model once AI starts replacing human labor in large numbers. Otherwise, owners of those companies will massively benefit whereas everyone else will be poor. We are talking about extreme inequality, much bigger than what we see in the world today. Solution will be a new tax to dampen those inequalities. Not remove them, just throttle them so that owners of those companies are still rewarded, but their success also spreads to everyone else. Then, distribute the tax proceeds equally to everyone, in the form of universal basic income. Then, everyone has a baseline income on which to survive even if they don't work, and everyone has an opportunity to earn more if they want to. Companies can continue selling us products because we all have a basic guaranteed income. If we can pull this off as a society, we will end up in a better place than today, but only if we manage to control AI and not destroy each other in the process.
youtube
AI Jobs
2025-11-03T18:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxn3U1qCv1v9raZ6aN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwqAs2bYID9VESmus14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwNjtgDZVHzwr571AN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxaNSNHW3ZDm6vQEiB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzP7JwW2urJ4cD4bqd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyUCSrckRqGSXAbtZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxx4nOKwEnH-t5sYF54AaABAg","responsibility":"elite","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxi9i_ygGms1CAhdAd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxxWgkuBZekpRAQQGp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugydav1OguWsoOn5iN14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
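A minimal sketch of how a raw response like the one above can be matched back to a single comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself; the helper function name is hypothetical, not part of the tool.

```python
import json

# A raw LLM response is a JSON array of coded comments, one object per comment.
# One row from the batch above, kept as a string the way the model returns it:
raw_response = """[
  {"id": "ytc_Ugydav1OguWsoOn5iN14AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

def lookup_by_id(raw, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            # Drop the ID so only the coded dimensions remain.
            return {k: v for k, v in row.items() if k != "id"}
    return None

coded = lookup_by_id(raw_response, "ytc_Ugydav1OguWsoOn5iN14AaABAg")
print(coded)
# {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

This mirrors the "Coding Result" table: each key in the returned dict is one row of the table for the looked-up comment.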