Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| We already classify AI as a weapon, defense took over AI and from there we are t… | ytc_UgyeIEXcs… |
| AI also known as LLM are notoriously dumb. They have the exact same pitfalls as … | ytc_UgwAiVWTo… |
| AI and AI detection are all gonna be making everyone point fingers at each other… | ytc_UgxULhIxI… |
| The current AI are nowhere near that dangerous or "intelligent". They can't resc… | ytc_Ugwqzvij_… |
| Ai cars are dumb Actually, they just stop for no reason in the middle of the roa… | ytc_UgyzqZCVA… |
| "Nobody is talking about making it illegal" Lol Sure there won't ever be a "if … | ytr_UgzZFE-Xo… |
| Interesting interviewee but he lost me when he said " human consciousness" will … | ytc_UgyYKPhC4… |
| Go off, we love to see ai hate. Look, here’s my thing, if you’re using ai, it’s … | ytc_Ugw-Pt4qW… |
Comment
> it's not just empathy they need to program into the AI, it's responsibility and ethics. i call BS that they "haven't figured out how to do that yet." if that's the case it's because the people who work in technology are extremely stunted. there are plenty of people in our society whose entire career specialty is ethics (like clergy) or empathy (psychologists) it would not be that complicated to HIRE THEM, but tech companies DON'T CARE. tech companies attract employees who like computers more than humans, and thus they create technology that destroys humanity.

youtube · AI Governance · 2026-01-06T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwmc1SeAmBGG2CuE5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyZl88_YSrWHLQ4ja14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyYprfRmZWvon6qdSF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwdCLxEr1JXtckkJOd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWuKKNyRRRlKwmlHV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugw-u-JtfuYZOw1OglR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7aQlyIUs88RAFZ2V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQAvlDQJXxRPnC6DN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw5_JzSZdf99SLSt-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5vSkgK1Fg7cPDjBJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
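
The per-comment coding shown above is just the entry of this batch array whose `id` matches the inspected comment. The snippet below is a minimal sketch of that lookup, assuming the raw response is stored as the JSON text shown here; the function name `lookup_coding` and the calling convention are illustrative, not part of the actual pipeline.

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and return the coding for one comment.

    Assumes the model returned a JSON array of objects, each carrying the
    comment "id" plus the coded dimensions, as in the example above.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # malformed model output; surface it for manual review
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None


# Example (hypothetical variable `raw_response` holds the JSON array above):
# lookup_coding(raw_response, "ytc_UgyQAvlDQJXxRPnC6DN4AaABAg")
# -> {"id": "ytc_UgyQAvlDQJXxRPnC6DN4AaABAg", "responsibility": "developer",
#     "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
```

Looking up `ytc_UgyQAvlDQJXxRPnC6DN4AaABAg` in the response above yields the developer / deontological / regulate / outrage record displayed in the Coding Result table.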