Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We are creating a successor species and we humans are playing a bigger role in it then we realize, "like in the meanwhile we have crispr, which is a bunch of DNA sequences used for DNA splicing", "whitch is genome editing, if im correct, So to simply put it like this, every body that's working on crispr/genetic engineering, ASI, cloning, humanoid robotics, cybernetics, BCI, neural networks and machine learning and all these things, they will inevitably lead us down a path of abundancies and possibilities we can't possible began to imagine. "How do I know this"?, well if you think about it, " elon's BCI has already been approved by the FDA", and AGI/ASI is right around the corner, we already can clone humans, "like, it's not out of the question that we won't be cloning and genetically engineering people to be smarter, faster, stronger, super athletic and overall more capable than the average person" ,"Now this sounds like cybernetics, now cybernetics is when disempared people go from being broken to being cyborneticly inhanced, now remember cybernetic organisms are individuals who's minds and bodys has been enhanced, via cyborgnetics. So it's going to be cybernetic organisms, ASI powered humanoid robots and regular people except with the highest of IQ's working and learning from each other to advance the future.
youtube AI Governance 2024-04-13T20:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwuNuKZZGA2dA09QbN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzVFHI6uJ2893xAfwV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwnLx9XGlatQ1sU8P14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyPYf0HTFuiugS2fCJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgySJPtGeITMDwOeKGN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgynJ4Zd92CWjlCpWOx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzm11aXQmhdFPoNF194AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyHb9LTskzQyPGhBBx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwhY51C0gJUm8Mxmw14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyqgJe5yn1ThAoOE2B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
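The coding result shown above for this comment (responsibility: none, reasoning: unclear, policy: none, emotion: mixed) matches the record with id `ytc_UgySJPtGeITMDwOeKGN4AaABAg` in the batch response. A minimal sketch of how one comment's codes can be pulled out of such a response — the `lookup_codes` helper is illustrative, not part of the tool:

```python
import json

# A trimmed stand-in for the raw LLM batch response shown above.
raw_response = '''[
  {"id": "ytc_UgwuNuKZZGA2dA09QbN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgySJPtGeITMDwOeKGN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''

def lookup_codes(response_text, comment_id):
    """Parse a JSON batch response and return the coding for one comment id."""
    for record in json.loads(response_text):
        if record["id"] == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None  # comment id not present in this batch

codes = lookup_codes(raw_response, "ytc_UgySJPtGeITMDwOeKGN4AaABAg")
print(codes)
# → {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'none', 'emotion': 'mixed'}
```

Matching on the `id` field rather than on list position makes the lookup robust when the model returns records in a different order than the comments were submitted.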