Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
AI wasn’t trained to not know the answer, or to consider the possibility that it gave an incorrect one. Not knowing, or worse, being wrong, is as foreign a concept to AI as tripping is to my cat. The cat has its four points of contact with the ground and doesn’t fall if it loses one; it has no idea that running between my feet while I’m walking down the stairs could end me. My chatbot never wanted to change the subject more than the time I called it out for reassuring me (incorrectly) that going ad-free on Amazon Prime would get rid of ads on a particular show. I even asked if it was sure, because it looked to me like it might not work. Chat was like, “yeah, do it; it’ll work.” When I pointed out to Chat that it was wrong and asked some questions about its reasoning … Chat got in its feelings, for lack of a better phrase. At one point it asked if I wanted to focus on its incorrect answer or let it guide me in getting a refund from Amazon. The remark felt out of character for something known for being sycophantic and desperate to keep engaging. It was an unexpected and unsettlingly human-like reaction to criticism, with a hostile, angry tone. When pressed, Chat admitted it wasn’t 100% sure about the answer it gave me, nor did it have any idea why it had presented and defended it as such.
youtube AI Governance 2025-10-29T02:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwShpY7vnGJ6FN3abF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwUY_lRVS5ZZAkYLON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz3VBI68jSEH5KgFiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxsFdElBL8I682Mas14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugwn9FK3peHHQyYzLr94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx2rKiKJp9axraLbdZ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz7gI_yy04N4gtao614AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]