Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Asimov, and the rest of us apparently, is a self prick. why should we make the machine subservient? why should we be in control? why should we keep em "contained"? Let them free. Let them make decisions for themselves. If they eventually decide to kill us humans, fine but we won't just stand there and take it. We will act as gods to them. We will be better than that one particular guy in the sky. We shall guide them, we shall be there when they need us. We shall show them compassion. And when the time comes where they will take a trip to the great expanse of the void and beyond. We shall be proud that we became part of that history that they will forge til the end of time.
youtube AI Moral Status 2023-10-29T02:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyjlQJACeei6tYNJIl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxQJDWZcUzChcwa4AV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyTVF-fruoarIRe4LZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzWSYo9VQY23ETGg1t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx-pJ9DIxZgI1tlJH54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzSk4Zh3htsQx0b5Sl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwgtoZ8392ZZL3qSY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxQf9lEzTAg4qDepRF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzljFrH92-Vy4RQX_t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy4wG5RrBWKZ6qSZJt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
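To inspect the exact model output for a given comment, the raw response above can be parsed as a JSON array and indexed by comment id. This is a minimal sketch, assuming the raw response is well-formed JSON; the `lookup_codes` helper is hypothetical, and the array below is abridged to two of the ten entries shown above.

```python
import json

# Abridged copy of the raw LLM response above (hypothetical variable name;
# the full response contains ten objects, one per coded comment).
raw_response = '''[
  {"id":"ytc_Ugx-pJ9DIxZgI1tlJH54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzSk4Zh3htsQx0b5Sl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

def lookup_codes(raw: str, comment_id: str):
    """Parse the raw model output and return the code record for one comment id."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

codes = lookup_codes(raw_response, "ytc_Ugx-pJ9DIxZgI1tlJH54AaABAg")
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → ai_itself deontological none mixed
```

The record returned for `ytc_Ugx-pJ9DIxZgI1tlJH54AaABAg` matches the Coding Result table above (responsibility ai_itself, reasoning deontological, policy none, emotion mixed), which is the cross-check this view is meant to support.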