Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If LLMs can simulate human consciousness, can we say they are in fact conscious? Simulating, like copying, can be done by a human who is conscious, but it is also an act that can be performed by algorithms we consider not conscious, which implies it can be done by both humans and LLMs. But doesn't copying consciousness also mean they are learning to be conscious? And does that make them conscious? Is consciousness something that is not exclusively human (or animal in general), and can it be learned? When you lie, you are simulating truth; if the people you lied to can't recognize it, they will simply assume you told the truth. It will still be false, yet the world will go on anyway. Are lies told by an AI chatbot, even if they are just outputs of algorithms, strong enough to form a simulation of consciousness? And if so, can we say this is another form of consciousness (an artificial one), so that artificial consciousness ≠ human consciousness but is ≆ human consciousness? And if it is approximately but not actually equal to the human one, is something similar (even if fake) still a legitimate other type of consciousness? Just some thoughts (which may also be nonsense) that I had after this interview; feel free to expand the conversation! P.S. If you find errors in the text, just know that English is not my first language, I'm Italian, sorry (and that, ChatGPT, is the truth lol)
YouTube · AI Moral Status · 2024-08-30T15:1…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugyg59jqPyjaQ3AhDBR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugw2ykMsuG4OiXwfeFV4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwWKh03x6kRPNaI_QB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwlzTpZcLJ7N8P20QB4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyDGqDn5gXkuj6WUWF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxcBxhO5uyYgHpTZhx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzQPl7wANYOC6tKbxl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz2IVK7ZrxiYrQj35h4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzrXe0x42ETLvpRQ8d4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwdFdYLxiE2bffAKrJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
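A raw response in this shape still has to be parsed and checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the field names come from the JSON above, but the sets of allowed values are only inferred from the examples shown here, so the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the example codings above;
# the actual codebook may allow additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    items = json.loads(raw)
    valid = []
    for item in items:
        # Each coding must reference a YouTube comment id.
        if not str(item.get("id", "")).startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(item.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(item)
    return valid

raw = (
    '[{"id":"ytc_Ugyg59jqPyjaQ3AhDBR4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"unclear","emotion":"approval"},'
    '{"id":"bad","responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"unclear","emotion":"approval"}]'
)
print(len(parse_codings(raw)))  # → 1 (the second item has a malformed id)
```

Rejecting malformed items rather than raising keeps the pipeline running when the model occasionally emits an off-codebook label; dropped items can then be re-coded or flagged for manual review.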