Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I see AI as the next great filter of our species like from the Fermi Paradox. The amount of greatness it can do and damage it can do is about equal, it will either change our society and help build a closer utopia or it will end it. I am against AI I used to be all for it but now I really see the danger outside of the Terminator scenario as I see the most likely danger of AI is the way us humans use it against us it not the way AI uses itself against us. AI can be a good thing yes but when the danger is so high and there are bound to be other ways to achieve the same or similar results it is not worth the risk. I really wish Chat-GPT3 and the explosion of AI things never happened, it was the catalyst to this path we are almost inevitably going to go down now. All I can do now is pray and hope that true Artificial Intelligence is impossible to create, that these LLM and maybe something a little more advanced is as far as it goes. I really worry about what the outcome will be if it isn't that and we really can just make something infinitely times smarter than a human. At that point then I believe AI deserves to supersede us. Sorry for the rant just wanted to write it all out I guess.
youtube AI Moral Status 2023-08-20T19:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgygSxFEi-zp2_T0CF94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz4UtOxa8wqD1LQnSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMTUObYm8HQhG0USp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwcZD7G5PieUqDP4894AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwslGv09An0npXuI8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTMXyDuV_27nmXLe94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwc-UeyDf4XOPSxgvZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzt9pa8YQ7j7TDGTZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzGWN_EYo6g0fBRs2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzygj_TqS13D1-JjjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
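The batch response above is plain JSON, so it can be inspected with standard tooling. A minimal sketch in Python (the field names and IDs come from the response itself; the indexing helper is an assumption, not part of the coding tool):

```python
import json

# First two records of the raw LLM response, copied verbatim from above.
raw = '''[
  {"id":"ytc_UgygSxFEi-zp2_T0CF94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz4UtOxa8wqD1LQnSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# Index the batch by comment ID so a single comment's codes can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw)}

# The record for the comment shown on this page.
rec = records["ytc_UgygSxFEi-zp2_T0CF94AaABAg"]
print(rec["emotion"])  # fear
```

Looking the record up by `id` rather than list position keeps the check robust if the model returns records out of order.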