Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can see the scientist Roman who is talking about ethics in AI topics, which make sense and is noble. And I see the human Roman, who is “in the fields”, with his “theory” of simulation… I’m curious, why should someone care about ethics, if they “know“ that this is a “simulation”?!🙄🤔 There are lot of incongruencies in his discourse… And, by the way, there are people who want to die! His question “who wants to die?”, which seemed rhetorical, but he answered it, out of his own structure of explaining reality, (which is just individual): “No one does!”… his answer should seem general, and thus, just by generalization, a valid argument sustaining the “live forever” desire we should all have…🙄 …My granny died at 97 years old, she died because she wanted! She had a beautiful life, was healthy till the end, said she is not interested in life anymore, all her friends had died, she felt complete with what she’d lived and stopped eating, and within a week she was gone. … as there is one person, the “general rule” is not valid anymore… I know even more examples, as I’ve trained elderly in generation houses. So, I think we should be also very careful with these artefact theories of “simulations” or whatever “conspiracy” theories which are no different from the religious fundamentalist theories… religion is not about telling the truth about life or whatever! Religions are metaphorical and symbolical stories about how to PRESERVE LIFE, without understanding its origins, and not really comprehending our own nature, and maybe nature as a whole …, it might be another personal opinion, but check: does it spread fear or peace?
youtube AI Governance 2026-03-08T09:0…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | mixed
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwYeYwdWAfUNlC5D6p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugywah8ooRNtrwA4iNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyI3NRz5SbmDLYq8zp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJmd0cfalhZ0_qnP54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzIjGDmQJyFR01iuxN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz4r2dGf8B-scZ6HH54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwNCyYnTqPPGNuIV5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxzLcpD5RS0PEB22Fp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxW50kSiCJxTGw5pAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBpUhsszeyEGyc-Ex4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
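A raw response in this shape can be checked before the per-dimension values are written to the coding table. The sketch below is one way to parse and validate such a batch; the allowed label sets are inferred only from values visible in this output (the actual codebook may define additional categories), and the function name `validate_records` is illustrative, not part of the pipeline.

```python
import json

# Label sets inferred from the labels observed in this raw response;
# the real codebook may permit more values per dimension.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"none", "virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"none", "approval", "outrage", "fear",
                "resignation", "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    fall inside the (assumed) codebook for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

if __name__ == "__main__":
    raw = ('[{"id":"ytc_UgwYeYwdWAfUNlC5D6p4AaABAg",'
           '"responsibility":"developer","reasoning":"virtue",'
           '"policy":"none","emotion":"outrage"}]')
    print(len(validate_records(raw)))  # prints 1
```

Records with an out-of-schema label (or a missing dimension) are dropped rather than coerced, so malformed model output surfaces as a shorter valid list instead of corrupting the coded table.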