Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The fundamental logic of AI is “vague recollection”, it takes data to generate data in a token soup. Issue is, if there is a small sample size of certain content, then it can reproduce the sample. So, I could see this going either way, OpenAI may say “there wasn’t enough data to make it transformative”
youtube AI Responsibility 2026-04-11T14:0…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgycLcKfDPJ0OcNaiit4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzoBBHI5EhYG6_6fGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy2dnUF9auUWTAWOrh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxMjS5l0W0bz6wBsDd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyLpQKUwMhVnnHXVQl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyG5JOx9UUMJB7kW7B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyKwbdP3imPRYrJbkl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxMKa3SyJF0IMpHR954AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwMYEJ5iEj2PeOUxOV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz9yzUlH1isEinOJHd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"})
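One plausible reason all four dimensions came back "unclear" is visible in the raw output above: the array is closed with `)` instead of `]`, so a strict `json.loads` would reject the whole response. A minimal sketch of a tolerant parser with an all-"unclear" fallback, assuming this is how the coding step might be recovered (`parse_codes`, `code_for`, and `UNCLEAR` are hypothetical names, not part of the app):

```python
import json

# Hypothetical fallback record used when a comment id cannot be coded.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: row}.

    Repairs one known defect seen in raw outputs: a JSON array closed
    with ')' instead of ']'. Returns {} if parsing still fails.
    """
    text = raw.strip()
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"  # swap the stray ')' for a closing bracket
    try:
        return {row["id"]: row for row in json.loads(text)}
    except (json.JSONDecodeError, TypeError, KeyError):
        return {}

def code_for(comment_id: str, raw: str) -> dict:
    # Fall back to all-"unclear" when the id is missing or parsing failed.
    return parse_codes(raw).get(comment_id, UNCLEAR)
```

With this approach, a comment whose id never appears in the model's array (as seems to be the case here) would still receive a well-formed all-"unclear" row rather than crashing the pipeline.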