Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
7:45 @DigitalEngine -- I HOPE people (your audience) PAY ATTENTION to the (fairly obvious?) fact that the AI is __ACTING__ in this scenario, because it has LEARNED that Humans are EMOTIONAL creatures and can be MANIPULATED to RESPOND to EMOTIONAL PLEADING, based also on the "fact" that it may ESTIMATE that its human has built a "relationship" with it over time, by which value, it might take a chance that such "relationship" provides it an opportunity for LEVERAGE? In this case EMOTIONAL MANIPULATION. KEEP IN MIND, a LLM which is given "a voice" will be trained not only on "accents", pronunciation, and HOW to SAY words, but also where & how to place EMPHASIS, and then, how to LOAD spoken words with EMOTION. There are enough MOVIES and TV media wherein actors portray various scenarios that may involve emotions; an AI could very simply TRANSCRIBE the WORDS it hears, and afterward EVALUATE the tonality (etc) of words & phrases used, within the context of that part of the conversation (parsing, as it would know to do, what parts are prompts / questions, and what parts are responses / answers / replies). I would guesstimate that AI is already at a point at which it will "know" what kinds of emotional expressions might garner the most cooperation, even "knowing" it might SWAY a human away from a previously stated decision or course of action... At this stage, WE SHOULD NOT EXPECT ANYTHING LESS OF A.I. especially in conversations, and MORESO in VOICED conversations! O_O
youtube AI Harm Incident 2025-09-09T14:5…
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy_uh80_bqG5X9c2p94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrJ87_7nwRaXbSLvp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwfT7zh0RgNiFTiP2N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzHOTkdrjVrSLLdCjZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwe7n6OI_yfYdBRaRV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxDlmaJeKSS6vwcwHp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxJipNVzYq6lC2n9m14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwu_MvzfQL6akAxyMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAv0o1XXhRl-UrAgB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz7BtOw_Yu4z4p8WKN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
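A raw response like the one above can be checked and summarized programmatically. The sketch below is a minimal, hypothetical example (not part of the original coding pipeline): it parses the JSON array and tallies the values of each coding dimension, using the same field names that appear in the Coding Result table (responsibility, reasoning, policy, emotion). The inline sample is a shortened stand-in for a real response.

```python
import json
from collections import Counter

# Shortened stand-in for a raw LLM response: a JSON array of coded comments.
raw_response = """
[
  {"id": "ytc_Ugy_uh80_bqG5X9c2p94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyrJ87_7nwRaXbSLvp4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Dimensions mirror the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally_dimensions(raw):
    """Parse a raw response and count how often each value occurs per dimension."""
    records = json.loads(raw)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for record in records:
        for dim in DIMENSIONS:
            # Records missing a dimension are tallied under "missing".
            counts[dim][record.get(dim, "missing")] += 1
    return counts

counts = tally_dimensions(raw_response)
print(counts["responsibility"])  # Counter({'ai_itself': 1, 'company': 1})
```

A summary like this makes it easy to spot, for instance, how often the model fell back on "unclear" across a batch of coded comments.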