Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "58:00 if humans could live forever they would get bored. How would they continue…" (`ytc_UgxkcmxQB…`)
- "There are certain things that AI just can't do. This is especially true when it …" (`ytr_UgySUX49i…`)
- "I don’t get how people think AI is art , like how much energy and time and thoug…" (`ytc_UgzM7-CnG…`)
- "We will be the ones who kill ourself because we wanted AI and I have read that A…" (`ytc_UgyBUdiYP…`)
- "The programming has been done. The social slaves (reliant on social welfare) wil…" (`ytc_Ugw7Eda0G…`)
- "From this video I understood the only way to stop ai from taking over the world …" (`ytc_Ugxmm2tK_…`)
- "Human art says something, it doesn't matter if you intend it to or not, it just …" (`ytc_Ugzsjzv2Q…`)
- "The people running Waymo haven't been convicted of any crimes. Waymo has operate…" (`ytr_UgyEHzufn…`)
Comment
Excellent analysis. The harsh reality is just that the market is soft, demand is low and uncertainty is high. At least in many central parts of Europe we have a de-facto recession, over 2 years now, the longest downturn since economy rebounded after WW2. When companies cannot sell their products or services, an easy way to cut costs is to get rid of people. But it's certainly not because AI is replacing human work on a large scale - yet. I agree with the comment in the video that there are some productivity gains for sure, which translates in some cases that people get redundant, especially in larger teams/corporations. In the long run however the scenario that AI replaces lots of human jobs is definitely real. I don't know if it's in 2, 5 or 10 years, but it seems kind of inevitable to me. And we should use some brain power *now* to think how we deal with this as a society...
youtube · AI Jobs · 2025-12-11T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxIRSjstMRRf8pGcO54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_jolkkeMYBNmVQsR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzWhG8LRT0Db7Y_H614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTRByJ9KrdU1n54SV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPjWB52VNVe9iDAhp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6YvZu-981cHo3KDN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgylRsRuqlBJ4L2SkAd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgywXQzEJoMEt1tkMzZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxUFgUxDrhm3jr4iIF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxKbA7g5Ovid-a19Ut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
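The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table above plus the comment ID. A minimal sketch of how such a response can be parsed and looked up by comment ID (the `parse_codes` helper and the two-record sample are illustrative, not part of the tool; field names come from the response itself):

```python
import json

# Two records copied from the raw response above; a real response
# holds one object per comment in the coded batch.
RAW_RESPONSE = """
[
 {"id": "ytc_UgxIRSjstMRRf8pGcO54AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgxKbA7g5Ovid-a19Ut4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

# The four coding dimensions used in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Index coded comments by ID, keeping only the known dimensions
    and defaulting any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
            for r in records}

codes = parse_codes(RAW_RESPONSE)
print(codes["ytc_UgxKbA7g5Ovid-a19Ut4AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each coded comment resolves to its dimension dict in one dictionary access.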