Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI is the next step in evolution for humans. Humans will become obsolete and com…" (ytc_UgwcSNK3V…)
- "U dum then a mf why would u fight a robot as if u can beat the robot bro that ro…" (ytc_UgzHrAmrW…)
- "I don't see whats wrong with giving robots rights, if they become "Concision" Ju…" (ytc_Ugzc_jMUs…)
- "We just have to not care about AI anymore, but yeah... Just a little of us accep…" (ytc_UgzHjuwz7…)
- "100% over... studios are all about making money.. They're absolutely undoubtedly…" (ytc_UgzRGDu1x…)
- "5 times the healthcare for the same price? Yeah, I don't see that price saving…" (ytc_UgyoojpqK…)
- "I'll be honest when I say that from my experience using AI, it makes me feel pow…" (ytc_Ugxz3U7EP…)
- "Every content that i make as animator,will from here on carry a proper badge of…" (ytc_Ugxj2SHtK…)
Comment
I disagree with the idea that law and similar professions can't be taken over by AI—at least not completely. Here's why: most people are looking at the capabilities of AI today, not what it could do in the future. Imagine a time when you can hire a robot lawyer who's cheaper and better than most human lawyers. If it makes a mistake (which would be extremely rare, almost zero chance), people will simply acknowledge it—“Oh, sorry, it malfunctioned, let me fix that”—and move on. And when that AI lawyer starts presenting solid arguments and facts that human lawyers can’t counter (which will eventually happen), and begins winning cases consistently, people will start preferring it. However, in fields like medicine, AI will assist doctors, but in critical situations—at least for now—people will still prefer human doctors.
Sorry if I am too late 😅
Source: youtube · Posted: 2025-03-20T15:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgzlYkfSkuHr-uGKfQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5iMsTvuENcoG35D54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrzL5YgnlrCkv2XB54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxe6qhvPxvHdIafPrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMoLFn6y1Gsw5Vd054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw0QICGqsfeuqhWf3x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy1jxS0cXyHkuw-QZR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgynpUxzQZLw5BE58Mt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEc-2ljvnlPHYybKV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIsJ0iV6hqGoTd8Md4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
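A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal illustration: the allowed values per dimension are only those observed in this sample (the actual codebook likely defines more categories), and the example `id` is a placeholder, not a real comment ID.

```python
import json

# Dimension values observed in the sample response above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "resignation", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and check every record against the allowed dimension values."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(len(validate_batch(raw)))  # → 1
```

Rejecting a whole batch on the first bad value is the simplest policy; a production pipeline might instead collect malformed records for re-coding.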