Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I will say there is valid use for AI chat bots but that's mainly in summarizing …
ytc_UgyN8Gwrw…
Walmart tried self check out try making a ai driven store see what happens when …
ytc_UgxP8YysR…
LOL...So a bunch of WSJ non software or math background journalists try to rever…
ytc_Ugx_o58nl…
Thank you so much for making these videos. Pursuing a career in computer science…
ytc_Ugyyv1PtX…
Time is running so fast right now that this suggestion might become useless in s…
ytc_UgxkN1cyr…
1:28 "For the sake of, you know, the country's research and development, I think there's no problem with it."
No problem, my foot!
Man, I was so shocked after hearing …
ytc_Ugxn4xQJR…
Look up lobster boy. An Ai agent created to infiltrate the ai social network. Af…
ytr_UgyIOKCfW…
the thing that I don't like about all this situation, is that people will just s…
ytc_UgyzmQlsJ…
Comment
True, but I feel like I'm backtracking on my previous preferences. For example, I used to use automated cashiers, but not anymore. At first they were great: no queues, fast, convenient. Now that they've become the norm, there are long lines anyway, and on top of that I have to bag everything myself. If I buy something "non-standard," I still end up waiting for a human to come over. So I've stopped using them and even avoid places that don't offer a choice. The same goes for many other services: I'd rather talk to a human. Ideally, we'll find a balance between human interaction and automation. Maybe "augmented" human workers could be the answer, with people acting as supervisors. Or perhaps ethical committees could act as overseers. And there's also the chance that AI never reaches the level its promoters claim. Maybe we'll hit a plateau. After all, AI still depends on humans to learn, doesn't it? Otherwise, it's like making a photocopy of a photocopy: the quality gradually deteriorates instead of improving.
youtube
AI Jobs
2025-09-10T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzuDqxXOp7FTT7O_yx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrmVuKe-7c2nqfXel4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFOd6-6QQjWeYDzZh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy_KDvn32w0NPWLn2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxgbJE6nZoMnTH1H-54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzotds6n7Mvyvv_nbx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwWPcjsWfKuzLxbLtl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOJzOuWZ_IKveOL2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEa454vd6pmlRf9vZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxQqIo5KZChmT5RAI14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
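The raw response above is a JSON array with one object per comment, coding four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by comment ID, assuming only that the model returns valid JSON in this shape (the IDs below are taken from the response above; the function name `lookup_codes` is illustrative, not part of the tool):

```python
import json

# A shortened copy of the raw LLM response shown above: a JSON array of
# per-comment codes across the four coding dimensions.
raw_response = '''[
  {"id": "ytc_UgwrmVuKe-7c2nqfXel4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwFOd6-6QQjWeYDzZh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]'''

def lookup_codes(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID, or {} if absent."""
    by_id = {row["id"]: row for row in json.loads(raw)}
    return by_id.get(comment_id, {})

# Look up the codes behind the "Coding Result" table for one comment.
row = lookup_codes(raw_response, "ytc_UgwrmVuKe-7c2nqfXel4AaABAg")
print(row["reasoning"], row["emotion"])  # consequentialist resignation
```

Indexing by ID first makes repeated lookups O(1), which matters when a batch response covers many sampled comments.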