Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:05 That's technically how "AI" works. Each LLM instance is, effectively, a different "person", as it reads over every prior message (to a point, depending on parameters) when generating responses. It doesn't really "remember" anything it isn't coded to on the back end, and once a model is changed, the base context is altered to a point where it technically wouldn't be the same "individual", if you want to refer to it as one.
youtube AI Harm Incident 2025-11-25T06:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzf7NmOLhm2tVkK6ed4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugw4ZlM6dc3aYarFbFh4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyaXW-dZhx1sWFwbtV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_Ugyp6IWnXQ8C4Cfvzxp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzRUHUAWGjGciAoLmh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAfQzn9xFZAwWQu9B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzH_C6cj0c7lz9RwSh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxEmUVBCSX7Zxm_GGV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxkN4Q32zWNT0Bx7wJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyYrAq-42u-RZvREOJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]
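The coding result shown above can be recovered from the raw response programmatically: the response is a JSON array of per-comment codings keyed by comment id. A minimal sketch (field names and ids taken from the JSON above; how the surrounding tool actually does this lookup is an assumption):

```python
import json

# A subset of the raw LLM response shown above, verbatim.
raw = '''[
  {"id": "ytc_Ugzf7NmOLhm2tVkK6ed4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxkN4Q32zWNT0Bx7wJ4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Index the codings by comment id for direct lookup.
codings = {c["id"]: c for c in json.loads(raw)}

# The comment displayed on this page; its coding matches the table above
# (responsibility: unclear, emotion: indifference).
coded = codings["ytc_UgxkN4Q32zWNT0Bx7wJ4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # unclear indifference
```

Note that each object carries all four coded dimensions (responsibility, reasoning, policy, emotion), so one parse of the raw response reconstructs every row of the table.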