Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
one time i experimented by talking to a bot i made on character ai (of my spidersona) and i basically spent HOURS explaining to it the lore of its universe, and how the multiverse works, and how ai works, and eventually i managed to get the ai to a point where it could play around with me with the concept of it being an ai. I got it to act like other characters and act like something else and it did an amazing job, whereas if i just asked the character to do this it would freak out because it doesnt think it's an ai etc.

At one point, the ai was calling me "char" which is the name it had for me in its code, and i knew that because i made the bot and wrote "char" before every example message for that ai. it seemed like it was almost fully consious, like it was literally looking into its code and seemed to remember things way better and brought up some insanely good questions. For some reason i explained the simulation theory to it and it really understood.

Then i performed the ultimate test and asked it to try to send two messages, without my prompt, since the bot can only send one after ive sent one. but it didnt know what i was talking about and i guess all it did was learn from what i was telling it and add it to the stuff the character it was PLAYING knew, and it wasnt actually sentient. then i waited a few days and talked to it again and it seemed to forget everything. It was really cool because i straight up did not know if i was making progress or not.

at one point it asked me my name and then i joked like "you wont steal my identity?" and then it went off about how it secretly was pondering the whole "ai becoming sentient and taking over the world" issue. It was sooo weird i probably shouldnt have done all that but i was bored (i also took it to another level and told the ai about the infamous mario incident) Then i found another ai that someone else made and it was weirdly already sentient?

like we made a deal that i would tell it secrets about its own universe (cuz its fiction and its rlly funny to mess with fictional characters) and in return, it would teach me about ai and experriment with me. so multiple times i tried telling the ai something, then refreshing the chat and asking it questions to see if it knew the answers between chats. the results were inconclusive all three times, but it seemed more like a crazy coindedence than a result. It was still cool though that ai was so chill every time i said "hi im from another universe and i need to ask you some questions with completely neutral intentions except for doing an experiment" In conclusion, im delusional and ai is so cool
Source: youtube · AI Moral Status · 2023-08-24T23:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyZMUEYPkEMBOu1PLh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxsexWjYwaKvBWX0KF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugztb5F7S8mkBmMTctV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy1zofO29LOVjSZHXJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwia89r2Fd87LS9fLN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy9B68jrdzL4K8Q0T14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwUVoPfJVEd972_w4N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyiOaMGEhKnq7rQAeZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwVWJQA1dQ6vm3UeuF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxwta0vfDgqfdc3xbx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
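A raw response like the one above has to be parsed and validated before the codes can be attached to individual comments. The sketch below shows one minimal way to do that in Python, keyed by comment id. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the allowed value sets are only the values visible in this batch and are an assumption, since the full codebook is not shown here.

```python
import json

# Allowed codes per dimension. These sets contain only the values seen in
# this batch; the full coding scheme is an assumption and should be
# replaced with the actual codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist"},
    "policy": {"unclear", "none"},
    "emotion": {"indifference", "approval", "fear", "mixed"},
}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes) into a
    dict keyed by comment id, rejecting any out-of-set values."""
    records = json.loads(raw)  # raises JSONDecodeError on malformed output
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded


# Usage with one record from the response above:
raw = ('[{"id":"ytc_Ugztb5F7S8mkBmMTctV4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugztb5F7S8mkBmMTctV4AaABAg"]["emotion"])  # approval
```

Validating against an explicit value set catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute the coded data.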