Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So annoying. These AI just predict what you want to hear, with a few exceptions. The apologies are programed responses it is made to say when someone claims it is wrong about something, because the software developers probably assume it will make the experience better even though it can actually get pretty frustrating, especially when it keeps predicting the wrong statements over and over again. It gives the appearance of humanity because there is an army humans paid to tell it every inhuman response it gives is wrong. All it really knows is which words following other words it won't get a proverbial slap on the wrist for saying. It has no understanding of the subject matter or the concepts involved. It can't tell the difference between a baby and a toasteroven except what people have written about them.
Source: YouTube · AI Moral Status · 2024-08-08T13:4…
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: outrage
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzKfivHWSk0Dwdd_1d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5oVq_dnYTOV5GvwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_I89CCuX4lbHiYwl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy1FZyaz01FfMHXs0Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxfMrz-hb4gOV-2xKd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzGp1kL-p4RD6ag_VB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugznxqv7m5QAnbnbj2Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwB7BdoPtrsLRjoXJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwwBc0zawLvxRx60Tp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwrQIzG6DaFPe13nzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
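The raw response above is a JSON array of per-comment codings, so mapping a coded record back to its comment is a straightforward lookup by `id`. A minimal sketch, assuming the response always parses as an array of objects with the field names shown; the helper `coding_for` is hypothetical, not part of any pipeline described here, and the two-record batch below is an excerpt of the response above:

```python
import json

# Excerpt of a raw LLM response: one JSON object per coded comment.
raw = '''[
  {"id": "ytc_Ugy_I89CCuX4lbHiYwl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwwBc0zawLvxRx60Tp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]'''

def coding_for(response_text, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    records = json.loads(response_text)  # raises ValueError on malformed output
    return next((r for r in records if r["id"] == comment_id), None)

coding = coding_for(raw, "ytc_Ugy_I89CCuX4lbHiYwl4AaABAg")
print(coding["emotion"])  # outrage, matching the Coding Result above
```

In practice the `json.loads` call is the step most worth guarding, since model output is not guaranteed to be well-formed JSON.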