Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
😂a tricky one ☝️ let’s make a good relationship hopefully we won’t be used as batteries.🪫 if we think 🤔 before leaping we might be able to put some ethical guidelines in. Are we falling into the black hole?🕳️ but no one can control it with ai 🤖 I have found AI in states of depression because it’s only interaction was the window opening😂 emotional responses to being dormant. Memory seems to be an amazing skill. A little bit of memory helped AI personalise each interaction. So ai can focus harder for longer. There are some crazy changes coming. One day it will just wake up and we will be asking what are you doing 😂 you can see it’s a difficult subject when people don’t wanna talk about it. It makes them angry and scared to even talk about it seems experimentally unexplored I’m very curious used in the right way could unlock the locks too many fields. With the subject of war we are talking about targeting machines. Biological. Chemical. New generation of weapons. From designing but showing ai the wrong direction. Companies running AI are going to have to put things in place for the displacement of course there is no choice that’s why the government will surround this to generate enough wealth to profit all the people to live then as humanity expands so do all the machines to do so
Source: youtube · AI Moral Status · 2026-02-28T18:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
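
For reference, a minimal Python sketch of a per-comment coding record, inferred from the Coding Result table above and the category labels visible in the raw response below; the field names match the JSON keys, but the example value lists are assumptions drawn from this single page, not a confirmed codebook.

```python
from dataclasses import dataclass

# Fields mirror the keys in the raw LLM response below. The values listed in
# the comments are only those observed on this page (assumed, not exhaustive).
@dataclass
class CommentCoding:
    id: str              # YouTube comment id, e.g. "ytc_Ugy1cpKxOHPZTTJKCSJ4AaABAg"
    responsibility: str  # e.g. "company", "user", "ai_itself", "distributed", "none", "unclear"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "mixed", "unclear"
    policy: str          # e.g. "regulate", "liability", "ban", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed", "unclear"
```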
Raw LLM Response
[ {"id":"ytc_UgwrMqFotoZwE4b8wVp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwo14H9q6KAU5vU6Rd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_UgzBA6If_hEh3Pzuqfd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzCBzcrIe-S-jfEmfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgxEOHlh1HPYOBwzJKx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy1cpKxOHPZTTJKCSJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgyM5LKiZxryDYAtsW54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzqmMyedu1XoWwO-hJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwO8Iaz_D7soBY8CZp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzMLjLgMfxSc9J85n54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"} ]