Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really didn't get the simulated annealing approach. How did this algorithm end up more likely to end up at the global maximum? Wouldn't it be just as likely to jump from the "correct hill" to a "wrong hill" as vice versa? After spending so much time explaining the problem, the solution didn't get much attention?
youtube AI Governance 2023-12-23T13:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwGRpSdUbVxGf-GK9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzYwOVgSiwdzO400Ad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzx08ogPtE5PG7Nua94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwaEZYOBZ-hOvDENyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy2uQVRdIHcBtBk1IN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwj68AW9Y5jPCaNokt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxpPSRkAEsB5CJioNR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugygvba-J_B-C2qD1b94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykAtgSVh87ly0zJEt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwhGp3uO17CaJFeLY54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
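Inspecting a raw response like the one above can be done by parsing it and indexing the records by comment id. The sketch below assumes only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_codings` and the validation step are illustrative, not part of the original tool.

```python
import json

# Truncated copy of the raw LLM response shown above (two of the ten records).
raw = '''[
  {"id":"ytc_UgwGRpSdUbVxGf-GK9l4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzYwOVgSiwdzO400Ad4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions, failing loudly on gaps."""
    out = {}
    for record in json.loads(raw_json):
        # Raises KeyError if the model omitted a dimension for this comment.
        out[record["id"]] = {dim: record[dim] for dim in DIMENSIONS}
    return out

codings = index_codings(raw)
print(codings["ytc_UgzYwOVgSiwdzO400Ad4AaABAg"]["emotion"])  # indifference
```

Looking up `ytc_UgzYwOVgSiwdzO400Ad4AaABAg` reproduces the Coding Result table above: that comment was coded `none`/`unclear`/`none`/`indifference`.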