Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why is everyone arfaid of super intelligent AI? If you look at this AI, it is only designed for recognizing faces. It wont gain consciousness. But, there might be AI like those in the future. It only takes sombody with the wrong intent to do it. But it is highly doubtful. How can the AI even fight us? Sure, if it advance enough it can take down internet services but we will just go back to the old times. If thet had the ability to control drones or cameras, that will become a problem. But how will somebody even make a super intelligent AI? To make an AI, you need LOTS of data. Like this face recognizion AI needed hundreds of thousands of human faces. So, how exactly do we get data to train human-like thinking? One may say to use the data from the internet but how are you gonna filter out the useful information to the fake information? If you only rely on trusted sources, you will have less data making the AI less smarter. But it doesnt need to be smart. As long as you give it tio much control on intrastructure and it can make human-like decision (not logical decision) then we are doom. But hey thanks for listening to my ted talk.
youtube AI Bias 2020-07-03T14:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxDmiAWhdotuce_bo54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyw-l-UMzpxSihnyuF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxiAg8UxbKEL3LbdWx4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyLz6BRVPt9pZy_cHt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0Xvsbp6xz_htaTOh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4y2obkFVZqppQFXd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgznOiEY79-tI3y_ig54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx27u4Cnuh4SwnWb4R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxvr1wFb70ARD1jcKt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyuleTLVrtBUihu_2Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
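The raw response is a JSON array with one coding object per comment id, so matching a comment to its coded dimensions is a simple id lookup. A minimal sketch (the variable names are hypothetical, and it assumes the model returned valid JSON; only one entry is reproduced here for brevity):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment id.
raw = '''[
  {"id": "ytc_Ugw4y2obkFVZqppQFXd4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugw4y2obkFVZqppQFXd4AaABAg"]
print(coding["emotion"])  # fear
```

In practice the response may not parse cleanly (truncated output, stray text around the array), so a real pipeline would wrap `json.loads` in error handling and flag comment ids missing from the returned array.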