Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- LAW = Light Anti-Tank Weapon / LAWS = Lethal Autonomous Weapon System / No chance o… (`ytc_UgwfXsu0h…`)
- Just wonder, halfway in vid you mentioned AI researching AI; you'd be amiss if y… (`ytc_Ugwr0iQXf…`)
- It’s all fun and games to minimize the seriousness of this until someone uses AI… (`ytc_Ugw0045HR…`)
- How can there be any doubt about the dystopia AI will bring given our track reco… (`ytc_UgwNeBZhF…`)
- Well to the "It's very hard to build an AI with XYZ" line of thought: It might e… (`ytc_UgydqfQIC…`)
- Simp for the ai ceo not to fire you by being its spy among the humans to destroy… (`ytc_Ugwcpbxpa…`)
- How would we make AI safer? Less data input? Ethical restrictions? While we huma… (`ytc_Ugz4DhdYX…`)
- @DroneZ-oN Theoretically, needing humans is a very temporary issue for an ASI. A… (`ytr_UgxXIXg68…`)
Comment
9:23 and 28:24 Have to disagree with this point. LLMs have been incorrectly portrayed as some uncontrollable thing that grows on its own, a perspective shaped by sci-fi and not aligned with what’s actually happening in the research. One of the biggest misconceptions I previously had about the current approach to LLM training is that “AI grows on its own,” which is an oversimplified assumption and is solely based on the pre-training phase. The way LLMs are built and improved today is steerable, and what LLMs get good at is pretty intentional. Whether or not researchers are improving these models for the right capabilities is another ethical question.
There are certainly risks with deploying AI models, but these risks (hallucination, sycophancy, behavioral changes and self-awareness) are recognized as such and are actively being addressed by published research.
I think the discourse needs to be more balanced and include perspectives by people creating these models. Yes, there are societal and economic risks. The trend of CEOs slowing the growth of entry-level roles in favor of automation isn’t sustainable and is already backfiring. High schoolers aren’t learning how to write essays anymore. But AI has also been genuinely useful in other areas, especially in scientific research, and I think discussions on the societal effect of LLMs should be led with more nuance.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-22T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyBnx5CxIB82ljFO1N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwW03VC3ed2y9oK7gZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwn25wAFkJnqENhYT54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwC6zBKJni2YOF4I1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpPkkyw0y8NgioAdN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1mxNKiB8GX_mGHEp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhGOWDsh16fzeUiC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxnaKHF1_ABsdOJPcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzyB0hHhiI8cUwx-hV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxUHgo7TyISEkd9SNJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
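The lookup described at the top of this section (raw LLM response → coded dimensions for a given comment ID) can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it parses the JSON array above and indexes the rows by their `id` field. The sample row is copied verbatim from the response above.

```python
import json

# A raw LLM response is a JSON array of coded comments; each row carries
# the four coding dimensions (responsibility, reasoning, policy, emotion).
# One row here is taken verbatim from the response shown above.
raw_response = """
[
  {"id": "ytc_Ugwn25wAFkJnqENhYT54AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]
"""

# Parse the response and build an index keyed by comment ID.
codes = json.loads(raw_response)
by_id = {row["id"]: row for row in codes}

# Look up one comment's coded dimensions by its ID.
coded = by_id["ytc_Ugwn25wAFkJnqENhYT54AaABAg"]
print(coded["reasoning"])  # deontological
```

In practice the same index would be built once over all responses, so any comment ID shown in the samples above resolves to its coding result directly.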