Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
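Such a lookup can be sketched programmatically, assuming the raw responses are stored as a JSON array like the one shown at the bottom of this page (the file name `raw_responses.json` is illustrative):

```python
import json

def lookup_by_comment_id(path, comment_id):
    """Return the coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of per-comment codings
    for record in records:
        if record["id"] == comment_id:
            return record
    return None

# Example: lookup_by_comment_id("raw_responses.json", "ytc_Ugx6vZjGSGg4CrL-nnN4AaABAg")
```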
Random samples — click to inspect
- “It’s clear who causes more crimes then others, don’t blame AI for also seeing th…” (ytc_UgxGmyRhK…)
- “If u actually learn to use ai, and not just the consumer/marketing/investor BS, …” (ytc_UgxLdL7ws…)
- “A.I. has given people with no talent and no skill the ability to convincingly fa…” (ytc_Ugz2g-Y9N…)
- “That’s what the ATS does. Automatically trashes resumes that do not have the key…” (rdc_n6qynxr)
- “Agree so much. Intelligence requires consciousness. Period. It is very tiring …” (ytc_UgwsD_6W8…)
- “AI "Art" is boring because we connect to people when see them achieve something …” (ytc_UgyIjNL-f…)
- “I'd rather have the robot looking robots. The humanised ones just look creepy, l…” (ytc_UgwaXAx47…)
- “What is going to happen is they actually think they will always be able to be in…” (ytc_Ugwc3d23j…)
Comment
I dont agree with this guys assumptions at all. For all of this to work LLMs have to be the future of AGI, that AGI is highly possible if not inevitable, and that said AGI would value individual power and status over helping others. The first two I dont believe at all and the third feels more like the his personal views on humanity than anything else. I also fundamentally disagree with his belief that we should focus on preventing a superintellegent AI from taking over, a future scenario with no evidence it will happen, instead of focusing on the real world issues that AI is causing right now. To me that belief comes from privilege that they dont affect him, and so we dont need to focus on those issues.
youtube · AI Moral Status · 2025-10-31T07:5… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6vZjGSGg4CrL-nnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzbtNzVpAcjVuqkKRJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzkAZDOJhmoC8Hinhh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyfiR1311E7PqIM26J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyt13y3qcMLhP5Gm6Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz48ZOMgXd_uPzTEFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyYz43cuN5TRi6_PMN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwaNpbwGEXfFOnqAXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5Ge7eWsLI7MIRADV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwZW5NeKjUA4OAeTLR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
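A raw response like the one above can be parsed and sanity-checked before use. A minimal sketch follows; the allowed value sets are inferred from the codes visible on this page and may be incomplete:

```python
import json

# Value sets inferred from the codes shown on this page; likely incomplete.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def parse_raw_response(text):
    """Parse a raw LLM batch response; split records into valid and flagged."""
    records = json.loads(text)
    valid, flagged = [], []
    for rec in records:
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        (flagged if bad else valid).append(rec)
    return valid, flagged
```

Records with values outside the expected sets are flagged rather than dropped, so coding errors remain inspectable here instead of silently disappearing downstream.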