Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's ton of false or partly false information that AI systems are consuming. Machines can't tell the difference between true facts and partial facts or lies. It would seem that AI would consider all these as factual
YouTube · AI Governance · 2026-03-18T21:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwnohRehTx8W72W-2d4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyAN5OiVjdxCPn5MVl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzvFoZCDwkoJ2jTLLZ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwokca-T6V81-UV-vt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxbQMkxdjCpbhOArO54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzuI87MlXdUWIOf4aF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw7UK2pYXpA3p2M8S94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzlpH9ZvT1xXw4jREl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyr6LvmiER1rXU1wYh4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwbooMQ5Sg9dmHa_FJ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "industry_self", "emotion": "resignation"}
]