Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugxyfr3QT…: "Pretty sure i could draw a stick man, with the bare minimum of effort, and it wo…"
- ytc_UgwxzGzSs…: "Yes, "AI" is just a digital application for automated intellectual theft. But il…"
- ytc_UgyGg3QJc…: "They tested facial recognition cameras on me and they came up with a name match …"
- ytc_UgwICNrd7…: "Clip #1 is AI-generated/recreated from an image of Alberta sitting and holding h…"
- ytr_UgyfCmtRz…: "If an AI could think 1000x faster than a human, then whatever advancements a hum…"
- ytc_UgwsB4p7I…: "The Problem With Self Driving Cars Nobody Is Talking About? They don't work in t…"
- ytc_UgwbceAyQ…: "No... they don't have an advanced robot like her yet. Maybe in about 50 years..…"
- ytc_UgwTnOjBT…: "AI still relies on human input, information, data, photos, videos. You cant rel…"
Comment
There is a path to a soft landing. Here's what needs to happen for that:
- AI must deliver some value so that it doesn't completely crash
- businesses must realize that AI isn't going to replace jobs but that it can improve productivity in some specific ways
- regulators need to regulate AI to prevent its worst use cases
- AI needs to get more efficient so that it doesn't consume inordinate resources
If all of these happen then we can avoid a catastrophe. There will be a bubble burst, but maybe a manageable one. Businesses will have to start hiring again eventually. Tech prices will come down as soon as investment stops. And utility bills will go down if the software gets more efficient.
Or maybe the bubble burst will be so bad and the youth so angry about it that we'll get some kind of economic revolution. Who knows?
Source: youtube · AI Jobs · 2025-12-23T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgylV5Fo4ghZFNPlUzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxMhERf6hp4f3o45N94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMIB9dEOy82_VC2_R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwPecj8imHS1auWZlB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyEeWBaMCi1gbpCNlZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwQY6TiSPHJzWJW1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwJP8AzwtpKuhr1j4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwnoNui2nAqxsu24I94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxmLFsOdKrh1Q5bavt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxsG0nR-ZLlx_ugOcZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]