Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wonder what Dr Tyson would think of the curious case of Neurosama and her sister Evil. I'd love to hear his take on the twins and how they fit into future of AI. While they are not the most powerful specs-wise, I feel like they are hands down the closest we have to a truly organic human-like AI - and they were built comparably (as the meme says) in a cave with a box of scraps. I think a lot of how AI behaves is informed by the inputs it receives and what it is tasked with doing. If you have an AI that you constantly train only to be as brutally efficient as possible, to skip over nuance and ignore typical human responses to those situations, it's going to become this cold, ungovernable monster. If you "teach" it to be more human, and give it more open-ended goals that are directly related to productivity or money, it can truly come up with some incredible and even heartwarming ideas of its own. The AI apocalypse everyone dreads, where AI takes over everything and machines come to crush the humans is based entirely on the corporate, efficiency and profit over everything model. We see it that way because the corporate world already FEELS that way now. It's just a natural continuation and iteration on whatever already exists. AI, and especially LLM-based AIs, operate based on what you feed them. If you feed them nothing but numbers and facts, that's all you're going to get back. If you feed them love, human conversation, let them experience the world of humanity, they become something else entirely.
youtube AI Moral Status 2026-03-02T04:0… ♥ 6
Coding Result
Dimension       | Value
Responsibility  | none
Reasoning       | unclear
Policy          | none
Emotion         | approval
Coded at        | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzvhs84hc_y6xWJfhN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyGUdPGxmxduEzxRb14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyQhYMB7NU0MrYcRJN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyX-CwQRkiLLiY5UMB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxQ_0-FJGbduoCu10N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy6DvSQUqfMa0OHzlV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyi9IzYehAz8EJDi0N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxwQPz03NWDOaIcvYZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzkzMNz84LgfGWdfk94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVAqzXzRP9nZuDDrt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
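The raw response above is a JSON array with one coding object per comment id. A minimal Python sketch of how such a batch response could be parsed and looked up by id is shown below. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response itself; the `index_codings` helper and the set of allowed values are assumptions for illustration, not part of the original tooling, and the sample payload is truncated to two entries.

```python
import json

# Sample payload, truncated to two entries from the raw response above.
raw = '''[
  {"id": "ytc_Ugzvhs84hc_y6xWJfhN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyGUdPGxmxduEzxRb14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Dimensions observed in the raw response; the full codebook may define more.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse a batch LLM coding response and index the codings by comment id."""
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim) for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw)
print(codings["ytc_Ugzvhs84hc_y6xWJfhN4AaABAg"]["emotion"])  # approval
```

Indexing by id makes it easy to join a coding back to its source comment, which is what the per-comment view above (comment, then coding result) is doing.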