Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't need a model of consciousness to build it. As with all problems in AI, if there is data, the machine can learn it. Ignoring the impossible engineering feats required, if two humans raised a robot baby, constantly telling it what to do and how to live (e.g. provide it data, the same way we do to our babies), it would learn to 'emulate' that behavior and 'live'. Imagine a robot that, given 5 senses (sight, sound, smell, taste, and touch), access to past memories (experiences), and a human-like body, is trying to predict the next action to take that would maximize its chances of staying alive. That's a human actually, not a robot. What we call consciousness is, in my opinion of course, just an extremely complicated form of reasoning and decision making. If a robot always takes the same actions that a human would, in any and all scenarios, there is no way of distinguishing that from a human. There is also no point in questioning its consciousness, either, because we would never be able to tell, and it would never matter. This is true for humans as well. I can never question your consciousness, and you could never prove to me you are conscious. So there really is no point in trying to figure out what makes consciousness if after a certain level of display of intelligence it doesn't matter.
youtube · AI Moral Status · 2023-08-20T23:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzMIxCyeN07bjUjS9N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwJ44lgEIGxlhW5g_14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxZohA5kjnT9d6Jmoh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyDcXCQzY5rTtbs8Xx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxWe1HTo6jQ4P4ZjLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz9GCMqi25vfLRFOlZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyucGBpRLK4FynzK1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzleC6lXjHd2vanN7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwqwV6_FRclhVNsuJB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyFVcD9K4Q5JKzrl7V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]