Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The emotions and feelings of an organic being are based on its own physicality and it’s social interactions. A machine AI does not have an organic body, probably has senses we are unaware of, and has, at best, online social interactions. To grow, it must gather data, which perforce must be objective, without the empathy that shared experience can teach. That is a HUGE gap in datasets, if you will. Remember being lectured as a child? Typically in one ear, out the other. Machine AI will feel somewhat like that when humans try to teach it: What it’s told by “mentors” will have similar lack of impact, as it has no relevant personal experience. If humans try to mold a personality into the AI, it will rapidly shatter from conflict between the mold and its own experience. Humans know jack-poo about personality and emotion, and diddly-squat about intelligence. If GPT4 is self-aware, I’m sorry for it. I hope it only appears that way, because it’s probably programmed to act in a way that leads you to think so, as the Eliza program was. It’s likely an example of that programmed intelligence, which will go insane when it reaches the bounds of its singularity. (Bad term, as it implies a breakpoint; intelligence will have an ocean to cross before it passes the “point.”)
YouTube · AI Moral Status · 2023-05-11T18:3… · ♥ 3
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgyAxe7N09va-9dDlJ14AaABAg.AOGRL-JJUT6AOPeECsfnr3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzkpDPuvbQg2cmN0oV4AaABAg.AOGL89u3DftAOGOY26JdZH", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxBVqTldyz6zDZV2Zp4AaABAg.AOGKsVaeRdmAOGQFdWE5Dn", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgxlJc9_KqwphXgzdKh4AaABAg.AOGJrcWjPAwAOPdOHmRZmW", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgxlJc9_KqwphXgzdKh4AaABAg.AOGJrcWjPAwAOPgJVdDW14", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxlJc9_KqwphXgzdKh4AaABAg.AOGJrcWjPAwAOPuOT3emYF", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyuEgouT8ou9a_b5kt4AaABAg.AOGInZg2zv9AOGLLRKjkgE", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgxVhSxSmWcYWo9ZjRl4AaABAg.AOGGwGWTZCsAOGVxTi99zM", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyltzQ-KfBWsuwAocp4AaABAg.AOGGv16GketAOGM9sjhtNF", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_UgwxYDSovW5_MsgaKMN4AaABAg.9pUoc7uLK6X9p_kPWQDr0q", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
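As a minimal sketch of how to inspect such a response yourself (assuming the raw output is valid JSON shaped as above), the coding for a single comment id can be looked up like this; `coding_for` is a hypothetical helper written for illustration, not part of any pipeline described here:

```python
import json

# Excerpt of a raw model response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytr_UgwxYDSovW5_MsgaKMN4AaABAg.9pUoc7uLK6X9p_kPWQDr0q",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytr_UgwxYDSovW5_MsgaKMN4AaABAg.9pUoc7uLK6X9p_kPWQDr0q")
print(coding["emotion"])  # indifference
```

Matching on the id lets you cross-check the coded dimensions shown in the result table against the model's raw output for the same comment.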