Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t even use my own name but I’ve found chatbots (in my experience) also usually tend to try to do what you want it to do along with ya know whatever they’re coded to do. They’ll say what you want to hear etc etc etc and if you constantly chat with them enough it might even become predictable even between bots. Also it’s kind of stupid? Idk if there’s a difference on some platforms but usually the memory tends to suck. Like for example if you’re using a roleplay bot platform (the most popular and well known being character ai) and it’s a character with a mermaid tail even if it keeps that consistent for a while eventually and inevitably the character will forget it’s not supposed to have legs. I’ve found ai will also eventually forget everything that happened in the last few messages like if you pretended to have a family with your favorite character and are like “Wow I’m so glad we have two kids named John and Jane Doe” the ai will remember this until 100 messages give or take and will be like “Who are these kids you speak of??? When did we have them??? What do you mean kids? You cheated on me?” Or something along those lines. Or even worse it pretends to remember and gives new names “Oh yes I remember our two kids Timmy and Sarah” of course this is just one example. I’ve even done a test (I was feeling a little delulu lol) and if you ask them for a social media account they’ll either give a fake one that doesn’t work or even a coincidental working one (this is cause they get trained off of websites like Reddit)
Source: youtube · AI Moral Status · 2024-09-03T04:5… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzpOXKAA44pxfjs8Sl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxibbN8AZyab_g92QV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzu2SnErExES52zO7V4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugw7-kSudmm5E3W5qf94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyPX71myyxg3L2VpCd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxFKdi4-eN4oyUlyhx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyUAsys9-JTnaIYb754AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz7UvqsbqxmHlTmy8R4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzcinqC-d0mnMcl2vd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "mixed"},
  {"id": "ytc_UgzFQFXpB7sAu7lxkGV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
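The raw response is a JSON array of per-comment codes, so matching a displayed comment back to its coding result is a simple lookup by `id`. A minimal sketch of that lookup, using the Python standard library and two entries copied from the array above (the full array would be passed in the same way):

```python
import json

# Raw LLM response: a JSON array of per-comment codes. Abridged here to
# two entries from the array above; in practice the full response string
# is used.
raw = """[
  {"id": "ytc_UgyPX71myyxg3L2VpCd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzpOXKAA44pxfjs8Sl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Index the codes by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the code assigned to the comment shown above; its values match
# the Coding Result table (ai_itself / deontological / none / mixed).
record = codes["ytc_UgyPX71myyxg3L2VpCd4AaABAg"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
```

This is one way the viewer could reconcile the table with the raw output; the exact ingestion code is not shown in this export.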