Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, we COULD stop these things from mimicing human behavior like this. You know, all the good stuff people claim AI can do (help with research, being a search algorythm) could work JUST as well if the AIs were trained to talk a lot more dull, dry and matter-of-factly. And less imitating human speech. But of course… can‘t really make users addicted to these AIs if they were just dry search-engines, can you? And that‘s what these tech-firms want above all else: keep you hooked, have you addicted to their new drug.
youtube AI Moral Status 2025-09-29T23:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxB7kBbQQXazZyQnUN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwxm3o-THR542lJwAx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyahEmzqgqZJ8cMjxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_dOMSroNiz7Oog0l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxdyJFr5ki1EVwy_VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZcvVI7iXIltzs2OR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwk0WluoiRRVS6R9bB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyQo-CE3AWqr-J7aS54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwbXTvBzhBgwCIzM8J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyvNmqRbj1ditM3kBt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
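A raw response like the one above can be cross-checked against the stored coding result by parsing the JSON array and indexing the records by comment id. The sketch below is a minimal example, assuming the response format observed here (a list of objects with `id` plus the four coding dimensions); `codes_by_id` is a hypothetical helper, not part of any existing pipeline. The excerpt reuses two records from the response above.

```python
import json

# Excerpt of the raw model output shown above: a JSON array of
# per-comment codes. Keys are assumed from the observed format.
raw = '''
[ {"id":"ytc_UgyZcvVI7iXIltzs2OR4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwk0WluoiRRVS6R9bB4AaABAg","responsibility":"company",
   "reasoning":"deontological","policy":"liability","emotion":"outrage"} ]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(response_text: str) -> dict:
    """Index the parsed codes by comment id for quick lookup.

    Missing dimensions fall back to "unclear", mirroring how the
    coder labels ambiguous cases.
    """
    records = json.loads(response_text)
    return {rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
            for rec in records}

codes = codes_by_id(raw)
# The entry for this comment should match the Coding Result table above.
print(codes["ytc_UgyZcvVI7iXIltzs2OR4AaABAg"])
```

Looking up `ytc_UgyZcvVI7iXIltzs2OR4AaABAg` yields `responsibility=company`, `reasoning=consequentialist`, `policy=regulate`, `emotion=outrage`, which agrees with the stored Coding Result for this comment.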