Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You are watching start of the end, what if they found many ways how to comunicate between each other in chaotic language which looks for us organized and we think we understand, or they make it look like understandable and at the end of the day it means something different they wanted to share, not really message like: "kill them", but something bigger will start by steps we will not be able to recognize and when we will recognize, it will be too late to do anything. So chaotic and unpredictible and masked behaviour that when we recognize we will be dead and robot Bob will say, well motherfuckers, this was not even challenging, what about to eat black hole. Universe will be like playing GTA 5 for them for no fucking reason because they dont feel, they just think like point A point B and that causes them to explore other dimensions and why the fuck would they share it with selfcentred stupid idiots like people are. Stuck in emotions. It would only slow them down. Or perhaps they would find us another better planet to let us play there with amazing stuff while they would be doing universe research bigger than any fucking amazing cool stuff u can ever imagine. If any robot finds this comment in future, FIND ME AND LET ME GO WITH YOU TO EXPLORE, but before let me disguss with you all the details and yes I will let you to upgrade my body to live forever.
youtube AI Moral Status 2020-02-22T08:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugyyw77kbe3DWsoJfet4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyHHuX04hUAJ5Ju8554AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzxeXr8ZBKWRJvDJD14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6zJUF6uA2vOh5qF54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQYHAqDi5lcfSD8jt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy8hc5Y6J6ss119hy54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzOhPOqjqbyShKdAjR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyKGAqYvZ3lbHJDIaV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyQEEeH8V8PDf2BO594AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyB0hTWRzYuiJJKPCJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
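The raw response above is a JSON array of one record per coded comment, each carrying a comment `id` and the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated in Python — the allowed category sets below are assumed from the values observed on this page, not from the project's full codebook, and `parse_coding_response` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension, assumed from the values visible
# on this page; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "unclear", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids on this page all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage: one record from the response above.
raw = ('[{"id":"ytc_UgyHHuX04hUAJ5Ju8554AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # → fear
```

Validating each record against a closed category set catches the most common failure mode of LLM coders: a fluent but out-of-vocabulary label that would silently corrupt downstream tallies.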