Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To put it in a nutshell: AI is only dangerous as long as it is humans who talk to it. We are cooked. We talk about a different kind of intelligence, but in fact we created a digital human. Not artificial intelligence, but an artificial human. One could also see it this way: progress has never only solved problems; it has always created new ones at the same time. At the end of the day, climate change is the product of solving one of our biggest problems: the supply of food. If we continue as before, the solution to this problem will ultimately mean that we not only face the same problem again, but also all the damage that monoculture brings with it. Technology has undermined natural selection. I don't believe in God. I firmly believe that we are the gods who foresaw their own fall and did not thwart it, although they could. We could solve the biggest problems at any time without creating new ones. Most just don't want to, because humans are driven by primitive instincts. Not all, but enough that it will come to that.
youtube AI Moral Status 2026-03-02T01:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw93O8RgZRBklF64aV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwMQxDmCug_3NlePLp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzj5B2SmJqYyYxB9vt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxJZukO05mT_gX3XEh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzxKpVRg69MlTaXTCd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyZIgOMzzLFnjrPV-F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyDF7XvoMtCFft7p-F4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxRocDGw0B25BOD-AF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyJpjHjk421DQKY8CB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzdutVh37X0NHTcq2h4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
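A raw LLM response in this shape can be validated before the codes are stored, so that hallucinated labels do not silently enter the dataset. The sketch below is an assumption, not the pipeline's actual code: the allowed values per dimension are inferred only from the labels visible in this response, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the records above
# (assumption: the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "government", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "outrage", "mixed", "resignation", "fear", "approval"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels are known."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every dimension must be present and carry a known label.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = ('[{"id": "ytc_example", "responsibility": "distributed", '
       '"reasoning": "consequentialist", "policy": "unclear", '
       '"emotion": "resignation"}]')
print(len(validate_records(raw)))  # 1
```

Records with an unknown or missing label are dropped rather than corrected, which keeps the stored codes auditable against the raw response.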