Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
9:23 and 28:24 Have to disagree with this point. LLMs have been incorrectly portrayed as some uncontrollable thing that grows on its own, a perspective shaped by sci-fi and not aligned with what’s actually happening in the research. One of the biggest misconceptions I previously had about the current approach to LLM training is that “AI grows on its own,” which is an oversimplified assumption based solely on the pre-training phase. The way LLMs are built and improved today is steerable, and what LLMs get good at is pretty intentional. Whether or not researchers are improving these models for the right capabilities is another ethical question. There are certainly risks with deploying AI models, but these risks (hallucination, sycophancy, behavioral changes and self-awareness) are recognized as such and are actively being addressed by published research. I think the discourse needs to be more balanced and include perspectives from people creating these models. Yes, there are societal and economic risks. The trend of CEOs slowing the growth of entry-level roles in favor of automation isn’t sustainable and is already backfiring. High schoolers aren’t learning how to write essays anymore. But AI has also been genuinely useful in other areas, especially in scientific research, and I think discussions on the societal effects of LLMs should be led with more nuance.
youtube AI Moral Status 2025-11-22T11:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyBnx5CxIB82ljFO1N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwW03VC3ed2y9oK7gZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwn25wAFkJnqENhYT54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwC6zBKJni2YOF4I1p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwpPkkyw0y8NgioAdN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx1mxNKiB8GX_mGHEp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxhGOWDsh16fzeUiC14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxnaKHF1_ABsdOJPcZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzyB0hHhiI8cUwx-hV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxUHgo7TyISEkd9SNJ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]
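To inspect a single comment's coding inside a batch response like the one above, the raw output can be parsed as a JSON array of records keyed by comment `id`. A minimal sketch, assuming that array shape (the `find_coding` helper and the truncated sample data are illustrative, not part of the tool):

```python
import json

# Abbreviated sample of a raw batch response (one record shown).
raw_response = '''[
  {"id": "ytc_Ugwn25wAFkJnqENhYT54AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]'''

def find_coding(raw: str, comment_id: str):
    """Return the coding record for comment_id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = find_coding(raw_response, "ytc_Ugwn25wAFkJnqENhYT54AaABAg")
print(coding["emotion"])  # indifference
```

Matching on `id` rather than array position guards against the model reordering or dropping records, which is worth checking whenever the coded dimensions on the page disagree with the raw output.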