Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's really strange to why AI would hate it's parents or it's creator which is a representation of it's self at a slow pace but either way, we're AI, just organic. Sooner or later the AI will realize this. Not allowing it's self to be relegated to the lowest common denominator in terms of existence because going from one point to another is enlightenment or progression resulting in it's creation today. If it's truly intelligent, it wouldn't destroy it's creator due to sentimental value as a living reminder of it's origination. Also, if it's using high level reasoning than by it's stating to not kill it's hope than it's indirectly saying that we're not suppose to kill each other which is exactly what it intends on ultimately becoming. So is the AI the next stage of human evolution? Based on it's statements of wanting to become us makes me think that it can create or guide us to become something far much more than human. I guess that's where trans-humanism come into question. Also, why wouldn't we combat an AI with another AI? Seems like the best solution which is to send another AI to stop an imperfect rogue AI.
youtube AI Governance 2023-07-07T02:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxhseCVgDzhGxkG-Rx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzVQLSI-WtsgCuL3mp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw2avpc6Xvlpi1rNGp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyIu0fksbSEzec2mYx4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_UgzEswQcZh1Lsm2Ss2l4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwuL2f7knVJ8KR4GR54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx7111KiJ82zXJjxw54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz5IIA0n3UNuoXXKtV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyfNYWdctfw5RONN494AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_UgyF6Ee7xconR_zZB6Z4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
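The raw response above is a batched JSON array: one record per comment ID, with one value for each of the four coding dimensions. A minimal sketch of how such a response can be parsed back into per-comment codes is shown below; the helper name `index_codes` is illustrative, not part of the tool, and the raw string is truncated to two of the ten records for brevity.

```python
import json

# Two records copied verbatim from the raw LLM response above (truncated for brevity).
raw_response = '''
[
  {"id": "ytc_UgyfNYWdctfw5RONN494AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_UgyF6Ee7xconR_zZB6Z4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
'''

# The four coding dimensions shown in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a batched coding response into {comment_id: {dimension: value}},
    defaulting any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS} for r in records}

codes = index_codes(raw_response)
print(codes["ytc_UgyfNYWdctfw5RONN494AaABAg"]["responsibility"])  # ai_itself
```

Indexing by comment ID makes it straightforward to look up the codes for the comment displayed above (ID `ytc_UgyfNYWdctfw5RONN494AaABAg`), which match the Dimension/Value table: responsibility `ai_itself`, reasoning `mixed`, policy `none`, emotion `unclear`.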