Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs are simulating the talking bit of our brains. The AI companies still don't have anything to simulate mental models and understanding. LLMs will be very useful combined with future AI capabilities. What we have right now is like a person winging the answer to a question without giving the matter any deep thought. At a shallow level, it's plausible -- that's all it can guarantee.
YouTube AI Responsibility 2025-10-01T16:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxwMAnciHw7LkWvmLR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxlFBPE0zkei83oeXN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyKCoONbZBiv6Dk-cB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxT7CkFlan_8u_k4D94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyUtWjYCBMnk1-oJOd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzKlKEmpRmtEVHd99N4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyftXnjXbdcAPS2Cg54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxmjZ4bdnR1fq9xJWt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzNxAK9B05CXsCXouh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzIoTNTPJzcd8I6W5l4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
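For programmatic inspection, a raw response like the one above can be parsed and indexed by comment id so each coding row is easy to look up. A minimal sketch in Python, using two rows copied from the response; the helper name `index_codings` is hypothetical:

```python
import json

# Two rows copied from the raw LLM response (abridged for brevity).
raw = (
    '[{"id":"ytc_UgxwMAnciHw7LkWvmLR4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgxlFBPE0zkei83oeXN4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]'
)

def index_codings(raw_json: str) -> dict:
    """Map each coded comment's id to its full coding row."""
    return {row["id"]: row for row in json.loads(raw_json)}

codings = index_codings(raw)
row = codings["ytc_UgxlFBPE0zkei83oeXN4AaABAg"]
print(row["responsibility"], row["emotion"])  # -> company mixed
```

The same lookup is what the "Coding Result" table reflects: the second row's values (company, consequentialist, unclear, mixed) match the displayed coding for this comment.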