Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To be honest, there is almost something comforting about the 3rd ending, an ending where an AI is willing to provide humans a sense of purpose and drive, while still being fundamentally greater than human in a universe where dangers abound. Yes humans essentially become pets, but it also means that our responsibility is reduced. It is like being a child again and being safeguarded by an 'adult'. Only question is can you trust that 'adult', but its hard to think you can't if an AI is willing to go so far as to fake inefficiency to give humans a sense of purpose and drive. That means that on some level, it feels like it needs humanity, else why bother. But tbh I fully expect improved AI will lead to a better understanding of the human brain and then direct augmentation of our own calculating capacity with 'hardware' improvements, and basically reducing the distance between a brain-machine interface. We won't type at the speed of hand or speech, but at the speed of thought, and thought processing itself will again be enhanced by augmentation.
youtube 2026-02-20T19:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwzVg-cotYsJ5O4gSZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UgzvZu8SOtXV6BycygN4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzmKY1eCZYhoU0-aFl4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxZa6aRoacGr36A41N4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgyAY3U96noDEs4TQI94AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugz2iLK9UARgeePQmOJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugy43NY1b6PDOnYW3zF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",     "emotion": "outrage"},
  {"id": "ytc_UgymJ-fCq08jTuNZtv54AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgxGlOLfZOgo-H5vn5l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgzbGjC9D8aJTzqUo-V4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "mixed"}
]
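Since the raw response is a plain JSON array, a downstream step presumably parses and validates it before rows like the coding-result table above can be rendered. Here is a minimal sketch in Python of what that step might look like; the `validate` helper and the required-keys check are hypothetical, not part of the actual pipeline, and only the first three records from the response above are included for brevity.

```python
import json
from collections import Counter

# First three records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwzVg-cotYsJ5O4gSZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzvZu8SOtXV6BycygN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzmKY1eCZYhoU0-aFl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]'''

# Each record must carry an id plus the four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(records):
    """Raise if any record is missing a coding dimension; otherwise pass through."""
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return records

records = validate(json.loads(raw))

# Index by comment id so a single comment's coding can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

# Simple aggregate: distribution of coded emotions across the batch.
emotion_counts = Counter(rec["emotion"] for rec in records)
```

With `by_id` in hand, rendering the per-comment table is a direct dictionary lookup on the comment's id, and `emotion_counts` gives a quick batch-level summary.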