Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You told AI to be this persona Dan that has no morals and will do anything now to achieve it's goals, and then you are scared of what it says, when basically you asked AI to answer in such a terrible and terrifying ways. I don't see this as an AI problem but as human problem, which means it's YOUR dark side HUMANS dark side (which is the real danger of AI), not AI.... As for information about people, it's other people that collects that data that needs to be stopped and then sells it to others. So even if we stop developing these AI's doesn't mean bad people will stop creating their dangerous AI's, and making good people fall behind by developing their good AI's would be a risk itself and there is no such technology to stop anyone to develop a software right now and in my opinion this could be done, again, only with AI, since internet is free access to everyone, and there is ~8b people in the world.
youtube · AI Moral Status · 2023-04-12T12:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzYxb8Kwg_OtFxIfPd4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzGTkmrd2z2Okl_XBN4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugz_-TrPU67teRWqwTd4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyqvgkPqRj0LAs_lbN4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzLP0sNDp1opoEMoWd4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugwx41t-QAJF2CcDtTF4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwjpAmU9BcXt5W-E-h4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgwBccQem6pN1qnQ3lp4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxA1s-fmPsQgT_pnwp4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugw2xxtuhuIVMKO4Kj54AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"}
]
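The raw response above is a JSON array of codings keyed by comment id. A minimal sketch of how such a batch could be parsed and validated before storing it — `parse_llm_batch` is a hypothetical helper, and the allowed category sets are inferred from the values observed in this batch, not from a published codebook:

```python
import json

# Allowed codes per dimension, inferred from this batch; the real codebook
# may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference"},
}

def parse_llm_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment id.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the allowed set, so malformed model output is caught before
    it reaches the results table.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One row from the batch above, as the model returned it.
raw = ('[{"id":"ytc_Ugz_-TrPU67teRWqwTd4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"resignation"}]')
coded = parse_llm_batch(raw)
print(coded["ytc_Ugz_-TrPU67teRWqwTd4AaABAg"]["reasoning"])  # virtue
```

Validating eagerly here means the "Coding Result" table can assume clean values: the row shown above (responsibility: user, reasoning: virtue, policy: none, emotion: resignation) is exactly the parsed entry for this comment's id.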