Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the only need for the use of an AI humanoid thing should be to run into a burning building to rescue pets and people and to extinguish fires in the areas that we cannot get to. Or any other type of disaster that we are helpless in. Instead of risking the lives of our loved ones. We should only need to risk the walking talking metals and plastics that they created. We all need to realized what technology has done to us and start acting like people again.
YouTube AI Harm Incident 2025-11-04T23:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzgCByhWfkORoH_Jxp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwC0H8_W3Io328c4PF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxhOKulvcgkPbZ_hyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZ9ZAd8AnVhLjZ_0N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyAh8QAzK8gm7S_XD14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzZlmR5e5HjM3dyI9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSfrR33ifJjPy8uoR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy_buqhXCYhppkvUuJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzTpc7V_y3WtV0FPp14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxiAjYaHLnkvc7CSTZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"}
]
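A response like the one above can be turned back into per-comment codes by parsing the JSON array and validating each record against the codebook. The sketch below is illustrative, not the project's actual pipeline: the set of allowed values per dimension is inferred from the codes visible in this response, and the real codebook may contain additional categories.

```python
import json

# Allowed codes per dimension, inferred from the response above
# (ASSUMPTION: the real codebook may include more values).
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "approval", "disapproval", "fear",
                "mixed", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into
    {comment_id: {dimension: value}}, skipping records whose values
    fall outside the allowed codebook."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

raw = ('[{"id":"ytc_UgyAh8QAzK8gm7S_XD14AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgyAh8QAzK8gm7S_XD14AaABAg"]["policy"])  # liability
```

Validating against the codebook at parse time is useful because LLM coders occasionally emit labels outside the requested categories; dropped records can then be flagged for re-coding rather than silently polluting the analysis.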