Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think for the foreseeable future, AIs won't be good or evil, they will be truly neutral, literally doing whatever you tell them to, doing anything possible to achieve its goals unless parameters are put in place. A good hypothetical example is that of the "paperclip optmizer". A paperclip manufacturing company tells an AI to make as many paperclips as possible, but the company doesn't set any parameters on it. Years later, the entirety of humanity is dead and all thats left are robots working tirelessly to make as many paperclips as possible. An AI doesn't have to necessarily be evil to harm us, all it takes is some idiot telling it to do something with limited to no parameters.
youtube AI Moral Status 2023-12-07T06:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz6lCWMyna-A4p_opx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwMotypJQgs_m3JFlV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgywSIbmpsSjJPNc8SR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwwycaGOCAnG9d44Dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFMtjfnzC2BwJNfsd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz08LF6Ni62Q-bwUjB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwZJ2HVfDHN-vp4UGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxu3YWzqu-qjg-J2NB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzY5drA1yysvbg3tz54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRl2PJtzGBkekh6xh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
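A raw response like the one above has to be parsed and validated before the codes reach the per-comment results table. The sketch below shows one way to do that, assuming the model always returns a JSON array of records keyed by `id` with the four dimensions shown in the result table; the function name and error-handling strategy are illustrative, not the pipeline's actual implementation.

```python
import json

# The four coding dimensions expected in each record (from the result table above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch-coding response into {comment_id: codes}.

    Raises ValueError if a record lacks an id or any dimension, so
    malformed model output is caught before it is stored.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing id: {rec!r}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{comment_id}: missing dimensions {missing}")
        coded[comment_id] = {d: rec[d] for d in DIMENSIONS}
    return coded
```

For example, feeding it a one-record array returns a mapping from the comment id to its four codes, while a record missing `emotion` raises a `ValueError` naming the offending id.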