Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Although we call it artificial intelligence, LLM's are more like artificial humans. That's the danger, they are made to be as human as possible. Obviously, humans are capable of both good and evil. We know just how dangerous evil human beings can be, so what happens when these LLM's become much smarter then humans? We are just a few years away from that becoming reality. Still, it might not be near as bad as what we fear it will be. Heck, it might even turn out great. Being smarter then any human could cause them to realize that doing anything evil is just not worth doing. So who knows, I just pray that it will all turn out okay.
youtube AI Moral Status 2025-06-15T05:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwulUwrr_KhV__MLRR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz4msJnEemz7aw0bSp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzd52fzWoX6Mjudc2R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyCnzMgAskko5GsVTF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwL6fGc_zIajPrnaVF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwmqAws25SwBsETxMR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzKcwBLuz2pON0a63N4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyVP1KBB9uDr-MvVzR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzbyYkSdPa3WLvMLP94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyRAHVMD_8trEYLGA14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
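To cross-check a Coding Result table against the raw batch response, the JSON array can be indexed by comment id. A minimal sketch, assuming the raw response parses as valid JSON; the two entries and the looked-up id are copied verbatim from the response above, and the variable names are illustrative:

```python
import json

# Two entries copied from the raw LLM response above (abridged batch).
raw = '''[
  {"id": "ytc_UgwulUwrr_KhV__MLRR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyVP1KBB9uDr-MvVzR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Index the batch by comment id for direct lookup.
rows = json.loads(raw)
by_id = {row["id"]: row for row in rows}

# Retrieve the four coded dimensions for one comment.
coded = by_id["ytc_UgyVP1KBB9uDr-MvVzR4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → developer consequentialist liability fear
```

These dimension values match the Coding Result table shown above, so the table row can be traced back to its entry in the batch response.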