Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Human empathy and the timeline problem. Saying that as Ai advances it will make many human lives and their children and possibly grandchildren's lives better. Does human empathy live beyond that rather short timeline thought? It's like "would you give your next door neighbor $50 if they were in temporary need of food. Then goes on to how about 4 doors down, two streets away, next county, someone struggling in Bosnia?" Where does the empathy become disassociated? Human life is very short and they need it to be as great as possible for themselves, loved ones and close friends. Where is that same empathy for 4 generations from now? How is that balanced for their relatively short lives? Let's say the good scientists win against the greedy. The human problem still is up against its profound history of not fixing until it's broken. Better defined as selfish greed. Self being the personal, the family or the tribe. Meanwhile, who gets crushed while others gain? Capitalism is said to still be the greatest form. While unregulated capitalism creates misery for the masses. I guess a bit of Roosevelt vs. Reaganite. Aye!
youtube AI Governance 2025-12-05T13:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzpRT-RIzZLgsKsr_V4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugze-PMqeImaVU96CFV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw_rSeHpqwBHmbGpml4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzGRWCEYAE6UqRDWyJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyrYvCiwZZk_Sgde9x4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgybbvgSuk-tVtK0qix4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxCWgyza2JR7dqZbHp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy2evnR1jQx8Lwdqfp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxyiZAceruCIrM9UEZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx6uGHjO2N_bmvn3x54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
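A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not part of the coding pipeline itself; the allowed value sets are inferred only from the values visible in this dump, not from an official codebook, and the shortened comment id is hypothetical.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from this dump (an assumption,
# not the project's official codebook).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate_codes(raw: str) -> Counter:
    """Parse a raw LLM response (a JSON array of coded comments),
    reject any record with a missing dimension or unknown value,
    and return a tally of the emotion codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return Counter(rec["emotion"] for rec in records)

# Hypothetical one-record response in the same shape as the dump above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
print(validate_codes(raw))  # Counter({'mixed': 1})
```

Validating before storage catches the common LLM failure mode of inventing an off-schema label (e.g. "anger" instead of "outrage") rather than silently recording it.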