Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Emotion: I don't agree that this simply explains it here (though he is most interesting to listen to), at 1:09:50, since can we not assume that an AI call agent that were "programmed to" help out customers could also do so in a commercially viable fashion? For example, at some stage it could explain to the caller that by now it has provided all the technical information and guidance necessary, because it has 'silently' figured out that, beyond the info already provided, the caller either lacks sufficient intelligence or the time to recollect himself given all the advice already received; or it could teach him how to do that, either by 'switching into psychological mode' or eventually by an ultimatum, "do these 3 things and I hang up" mode, whereby the AI call agent protects commercially valuable business time (using its own cues as to when it can indulge and when not) without going into a feeling-type mode. Odd that this seems ("feels") obvious, or not? Emotions are perhaps rather to be linked to met or unmet desires (not outcomes)?
Source: YouTube, "AI Governance", 2025-06-16T12:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxflnY_ovUtQU31DuB4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugyumz-Al-VxkR-jZrB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzZRYUZe0uZHaYlWbF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwQgRbSpGHI_xQv3454AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzaKqy-5AEwmV1FFN94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz8BVrTD6z1hwUvcyl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw68ZfjRwDILWqDJ354AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzrSoMrETfOfdj2SBx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwWSMM8szgpT1AG5CN4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzlN9vq7H3hVEgkAsR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
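The raw response above is a JSON array of per-comment codes. A minimal sketch of how such an array might be parsed and one comment's coded dimensions looked up (field names are taken from the response above; the `index_codes` helper itself is hypothetical, not part of the tool):

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw = '''[
  {"id": "ytc_UgxflnY_ovUtQU31DuB4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugyumz-Al-VxkR-jZrB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def index_codes(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions (all fields except 'id')."""
    return {
        item["id"]: {k: v for k, v in item.items() if k != "id"}
        for item in json.loads(raw_json)
    }

codes = index_codes(raw)
print(codes["ytc_Ugyumz-Al-VxkR-jZrB4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it straightforward to join the LLM's codes back to the original comments, or to spot ids the model dropped or duplicated.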