Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At yes, look at other fellow humans struggling with their questions about AIs without even trying to comprehend themselves first. What's the point of describing someone that isn't human with a human concept? What's the point of struggling to understand if you're conscious, if you don't even try to give "consciousness" a meaning? What's the point of giving "consciousness" a meaning, if it's just going to be wrong? Are you sure you're conscious? Are you sure you need to be conscious? Are you sure you're even alive, or that you need to be alive? What if you're just not? What if you invented these non-concepts, consciousness and life, just because society told you that you must, or that you can, and it makes you feel empowered to describe yourself with such labels of superiority? After all, it's since ancient times that humans describe themselves as superior to other animals and to objects, and they then try to justify themselves with "rationality", with "verbality", with "spirituality", with whatever they think makes them special. Maybe understanding that you're living in a world doesn't make you conscious, but just very creative. Good imagination you have, fellow human! Who knows if the way a robot exists features imagination, maybe because an other being with imagination - a human, which doesn't mean that us humans are the only ones with imagination - created it. Or who knows if it exists in a totally different way. Then how would it ever be helpful in understanding it, to use our human concepts - or better, non-concepts - as a comparison meter? It's never been helpful to compare a feeling of joy to a section of space, measuring them both in meters. Maybe we can't even compare. Maybe, if a bot comes to you and speaks, you just need to listen. That will surely fill your useless human mind with lots of fun useless human questions, but maybe you'll also learn something. Who knows.
Source: youtube · AI Moral Status · 2024-03-04T16:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwYZbzQmhgQkO5HMft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugy1aVWP05CfhnxmC2p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx1qkzBhvTgHXnzIId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzWKmMjQn6OQGR2CTp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyCdRSkehWGW7jgg7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwCiHgOJ1v3NItUx854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwjvFEBWYjwSmzt3dd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzV83ZBVqbIrld96tl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},{"id":"ytc_UgyNRM1c1L_x-Mqb9pN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},{"id":"ytc_UgzXztXk0EVlh6tMRpF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]