Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My brain goes into a knot thinking about AI. I understand the concept/reality that it could 'take over'... but if 'it' gets rid of humans, what is there to take over? If AI finds a way to 'take over' the human brain... then there is a problem. I mean, it's a problem any way you look at it., but that would be a BIG one.

The incidents of AI taking on human emotions has to be a mere emulation of human emotions. Still, that would make it do evil things. But if it's intelligent enough, it should realize it probably needs us to perpetuate its own existence. It could possibly start a 'race' of robots, but it should know that would not be the same as having actual humans. It would be mundane. If it's emulating us, it would probably want that which it is emulating to be around.

If AI is merely emulating human emotions, hopefully its 'intelligence' aspect (also emulated) would over-ride the negatively emotional aspect. I mean, that's where we go wrong in our own existence. I think a big mistake was feeding AI with ALL of the information from the internet. Much of that is emotionally self-serving. I wonder if we can delete that part?

Ultimately, AI is emulating humans and that will drive it into self-serving actions. The only hope is that it can 'catch itself', and keep itself from doing horrendous negative things. Even we 'catch ourselves'... sometimes.
Source: youtube · AI Governance · 2024-04-01T09:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzC_9shsfHCg5Uf4dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwDZzAc4bl4fmonWGB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwkxxDRLhtk58mCQsl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw6yt3y1wOtXbpCuPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxvrvs2c8C7ik3aERF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzBrGR7J9Va1bBFwOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz-pXcynnjwfwegk0x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwTEUuBu1DZFlqJawZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzwLCPBPeONu4qgcJZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw7J7xCIB9kg8GIXNN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]