Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with computational algorithms predicting what a biological aggregation of trillions of human neurons in the human mind will do or think is nowhere close to being trustworthy. For example: A man walks into an ice cream store twice a week on Wednesdays and Saturdays and orders an ice cream cone. On Wednesdays he'll order any number of their different 31 flavors, but on Saturday he always orders Rocky Road, as he has done for the past three years since he began doing so. The next upcoming Saturday, what are the chances he will order Rocky Road? An algorithm may make the calculation from the last three years of Saturday purchases that the chances are 99.98% he'll order Rocky Road from historical reference. However, from the perspective of human cognition, the chances of him ordering Rocky Road is 1 in 31 (given the selection of flavors) all the time, every time. The algorithm cannot make the leap of 'assumption' as to why he chooses Rocky Road on all the past Saturdays and on the 'prediction' of whether he will this upcoming Saturday.
Source: YouTube · Video: AI Surveillance · 2020-01-04T20:0… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxsQhmJhf-aUgnFU5p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyrvG5TVAv-9WO_cqR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxmhAKcZnlmcPlyHA54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwqh6QX3Q1P9Tv4kWd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzVlSdN8jp448jYQcN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQzmrEKFyxxNX65Wh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxjj2ZB5KgM0jC69oh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx5qUFpuBSvJ2-2gZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwCgUh2QaG4bCsnsPZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz6yweQIZlOQutC2n54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
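The raw response is a JSON array of coding records keyed by comment id. A minimal sketch of how such a response could be parsed and validated into per-comment codings — note the allowed value sets below are inferred only from the labels that appear in this response (the actual codebook and parser are not shown in the source, and `parse_codings` is a hypothetical helper name):

```python
import json

# Allowed values per coding dimension, inferred from the labels that
# appear in this batch; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}.

    Records missing an id, or carrying a value outside the allowed
    set for any dimension, are dropped rather than stored.
    """
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # no comment id to key the coding by
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

raw = ('[{"id":"ytc_UgxsQhmJhf-aUgnFU5p4AaABAg",'
       '"responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_UgxsQhmJhf-aUgnFU5p4AaABAg"]["emotion"])  # indifference
```

Validating against a closed label set at parse time is what makes a display like the table above safe: any record the model mislabels simply never reaches the "Coding Result" view.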