Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
#we_should_stop_AI_now Even if AI is not going to turn on us , what would we be , if there is a machine that can do it better then you? Well you ll do something else , oh wait , there is no creative jobs now. Great. Oh , we get better and better tech egeryday? Well i guess , what is there to do if you can do anything? Oh , everything? I am confused and lost now. What is special about us , if the machines do everything for us? Are we their pets? Cats are for sure ruling us. We do everything for them! For sure. And no problems with human comunication , if you have an AI bff , and every movie you watch is autogenerated. And ... I am sorry i am in shambels. I am a wreck i am thinking this for a year now... Anyway , AI cannot be good for humanity by design , and aditionaly , when missused or broken it could potentially kill us all. Seems like a great idea. Yeah , and also invokes deep questions about nature of mind and freedom , in the worst way posible posibly!(srry 4 that.) If we can we should opose its creation. #we_should_stop_AI_now exists now. And if your country has AI developers , and you can protest and stuff , you should probably start a movement , to ban development of the AI. While it is still very expensive to replicate or develop. So we could stop that. I cannot really because of some politic stuff.(people and goverment in my country kinda hate demonstrations , no mater which ones , i guess.) So please do it if you can. You will not be alone , at least eventualy. Only by caring about future and humanity we can save our civilisation from it. #we_should_stop_AI_now Stay safe and if teleporters exist in your time , do not beam , it may be a certain death. We can make a change!
youtube · AI Moral Status · 2023-12-15T23:2…
Coding Result
Dimension        Value
--------------   ----------------
Responsibility   distributed
Reasoning        consequentialist
Policy           ban
Emotion          outrage

Coded at: 2026-04-26T23:09:12.988011
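
For working with these codings programmatically, here is a minimal sketch of the record shape, assuming Python. The field names come from the raw response below; the literal value sets are inferred only from the examples visible on this page, not from an authoritative codebook, so treat them as assumptions that may be incomplete.

    from typing import Literal, TypedDict

    # Hypothetical type for one per-comment coding record. The value sets
    # are inferred from the examples on this page and may be incomplete.
    class CommentCoding(TypedDict):
        id: str
        responsibility: Literal["none", "distributed", "ai_itself"]
        reasoning: Literal["unclear", "mixed", "consequentialist",
                           "deontological", "virtue"]
        policy: Literal["none", "regulate", "ban", "liability"]
        emotion: Literal["indifference", "mixed", "fear", "approval",
                         "outrage"]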
Raw LLM Response
[ {"id":"ytc_UgwBBi9VJg6ABxutSBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyJU_Ha3H1zsugdvVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz1ylcRiR0i1GQOa9J4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgzMl7heB9iifmEeZUZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwAQ-S47UXqcksFXjh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw6NS5TjKZuvyA7qjZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgysgqWXhgttQJaRmfx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyJcr26pDavs0EJiat4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxlz69Nc7rMHJWLNoB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwJL4P1lBJC_f2G-ZB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]