Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I loved this video because whats being discussed really isn't knowledge, its how to grasp at knowing that you dont know (which is especially interesting considering how LLMs dont say "I dont know" because our society disincentivizes saying that too) and effectively, sort of "Sudokuing" our way into some level of understanding by simply determining blank spaces in said understanding. While it may not be imminent, we cant really gauge what it will take to turn a trillion knobs a trillion times until *pop* something unprecedented happens with AI technology. We also dont know whether that pop will be visible to us or whether only the experience of it will be apparent to us, which would be 2 separate things altogether and thats just one factor in a sea of factors to account for. All we seem to know is a hypothetical risk exists and most of the people developing this technology dont appear to be attempting to account for that. Regardless of where we're at with tech, that's a real concern because that's an issue that, if left unchecked, will continue to persist when consequences become more tangible. There is nothing wrong with being more responsible now. "What if we spend all these resources to be more responsible about what we're developing?" Isn't a harmful thing. Building a collective habit of ignorance in developing advanced tech, however, is so powerful, it has become an entire genre in science fiction.
youtube · AI Moral Status · 2025-10-31T00:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
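Each dimension takes one label per comment. For orientation, here is a minimal sketch of the label sets, assuming only the values that actually appear in the raw response below; the full codebook may define more (`OBSERVED_LABELS` is a hypothetical name, not part of the tool):

```python
# Labels observed in the batch response below; the actual codebook may define more.
OBSERVED_LABELS = {
    "responsibility": {"none", "company", "user", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "resignation", "outrage", "indifference", "mixed"},
}
```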
Raw LLM Response
[ {"id":"ytc_UgyCfMdD9BZ9eMYKsqd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzseFbn_NDciZiCeCV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzhsjkKf5CtLdjV_J14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyFYOWTEdGN4a8jODh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzwOxWNBYXv3-OJc0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyD67nqfhUMcUuEFJ54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxBsxsXfqw3nLQilpF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzW0ir331R8WyaaP6l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyskU1tqDELGLJpR614AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxON72riLvGjOusjV14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"} ]