Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
transhumanists have the tendency to evolve into something beyond humans! And I think they don’t mind seeing the end of humanity but who and what will replace the humanity? That’s something that transhumanists have no answer for, perhaps they’re of the erroneous thinking that the merger of a selected group of people with the AI would be the ideal thing , but they’re mistaken because the AI will forever be the handy work of humans and nothing can take the place of sentient beings, because biological sentient beings can perform in almost all conditions whereas the artificial intelligence needs the perfect environmental conditions to function and I think the AI can only become a threat to humanity if humans allow it and it’s possible that the dark and nefarious entities who might look like humans and have totally different agendas might succeed in eradication of humanity , and if they succeed then many surprises might surface , it would be a case of an opened Pandora’s box which would bring about a new world order of the kind that no human can ever have imagined it to be possible, we are probably beyond the point of return because we have done everything possible to bring about a new world order and it seems that the creators of the blueprint for the new world order Didn’t care about the survival of the human race and the goal wasn’t to bring about a better future for humans and who or what was the end game will probably be never known to the humanity.
youtube AI Governance 2023-04-18T03:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzNfc8FgULtK3IC_JB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgznKqfaRqggtc_5NLt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGr4l4w63E2SpIbVx4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx8BR6Bq6WvrW-kh614AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_Ugw5h4VwLnzy6zkaoQ14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyyujRqnQ37qXjAmQ14AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz__ZGf47NvUi0TWYZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxqBOLbwR2ZqJlXHWB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzkcW5OVE6GQrV9rfZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx4YUg7NS1IVIn2iYV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"}
]
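The raw response is a JSON array with one coding record per comment, keyed by comment id. Below is a minimal sketch of how such a response can be parsed and matched back to a coded comment; the two embedded records are copied from the response above, and `find_coding` is a hypothetical helper, not part of any real pipeline API.

```python
import json

# Two coding records copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_Ugw5h4VwLnzy6zkaoQ14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyyujRqnQ37qXjAmQ14AaABAg", "responsibility": "unclear",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def find_coding(records, comment_id):
    """Return the coding record whose id matches comment_id, or None."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)
coding = find_coding(records, "ytc_Ugw5h4VwLnzy6zkaoQ14AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer fear
```

The values returned for this id (developer, deontological, unclear, fear) are the ones shown in the Coding Result table above.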