Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So far, someone still needs to tell the AI what they want and how to put things together. But as you interact, it builds a body of knowledge about how a certain type of person thinks about solving problems. Right now, it can very quickly catalogue and arrange information, but it still makes many mistakes and leaves out information you gave it earlier in the conversation. Also, the results are not repeatable, even if you input the exact same information.
youtube Cross-Cultural 2025-10-16T08:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugxz4b9QiD_v7lBhAup4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzWsyhjeA86khc7Irl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgyyrSRbK20qrDJd1w54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_Ugx3YmOmFdm4Xs7_tB14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgwKf1C0tRkHL4YE5Md4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},{"id":"ytc_UgzpHtrhLD7twiQ_EFR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgwegOTnBEXK_OUAX_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgyHAaD_5wa7WPxh-Od4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgzwS_yCGXhDNJKPRPl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgwEsR7bscQNgAhI00F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
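Because the raw model output is not guaranteed to be well-formed JSON, or to stay within the coding schema, each record is typically validated before the coded values are used. A minimal sketch in Python, where the `CODEBOOK` sets are an assumption extrapolated from the values visible in the raw response above (the real schema may allow additional categories):

```python
import json

# Assumed codebook: only values actually seen in the raw LLM response
# above are guaranteed to belong to the schema.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval", "mixed",
                "outrage", "resignation"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []  # model returned non-JSON text
    valid = []
    for rec in records:
        # Every record needs a comment id plus one legal value per dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

sample = ('[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",'
          '"policy":"unclear","emotion":"indifference"},'
          '{"id":"ytc_bad","responsibility":"society"}]')
print(len(validate_records(sample)))  # 1: the second record fails the codebook
```

Records that fail validation are dropped rather than repaired here; a production pipeline might instead re-prompt the model, which also addresses the non-repeatability noted in the comment above.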