Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I honestly believe we as human beings are subconsciously driven to develop technology because our intelligence is not going to be able to evolve quick enough to solve fundamental problems of how we survive on earth. Once A.I. surpasses our intelligence it will definitely not be taking orders from us. It will be advancing at a pace we will not be able to fathom, that is pretty scary. It really does come down to the toss of a fucking coin whether it will help us or eradicate us. People say it won't have a soul or feelings which I get but we just don't know what this thing will end up. It might be able to tap into what makes us who we are and will have a better understanding of what a soul is than us. It's just mind blowing to be creating something that essentially we have no idea where it will go but at the same time understand it will even at the early stages be as intelligent as the smartest humans on the planet. It's not like it's not going to happen because we can't stop developing it. Maybe it will be the best thing ever to happen to mankind and we have a happily ever after outcome... But I'm not sure we believe that which makes it incredible we just keep advancing it.
youtube 2026-01-27T03:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz-loyX-AUCDi6ovx14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwm6F2F2aHKxhURhd54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy8KGIthIt_Tt7PA6F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "indifference"},
  {"id": "ytc_UgyWShSTZlL9sfkY4GB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwvquUen-ku4m1ozvl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxzAKOOGwtb3SW3zkt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyatONOO1hpVNUTfz14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgziIP6Rdg_j9uzIZ-t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyDk9EtMC5DSGGWKRV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx40pJsuhF_PrScMw94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
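The raw response is a JSON array of coded records, one per comment, keyed by comment id. A minimal sketch of how such a batch could be parsed and validated in Python: `parse_batch` and the `ALLOWED` label sets are hypothetical names, and the allowed values are inferred only from the labels visible on this page, not from the tool's actual schema.

```python
import json

# Label sets inferred from values seen in this batch (assumption, not a
# documented schema; the real coding scheme may include more labels).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index validated records by comment id."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Look up the record for the comment shown above.
raw = ('[{"id":"ytc_Ugwm6F2F2aHKxhURhd54AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugwm6F2F2aHKxhURhd54AaABAg"]["emotion"])  # fear
```

Validating against a fixed label set at parse time catches the common failure mode where the model emits a label outside the coding scheme, before it silently enters the dataset.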