Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
10:10 “I have a feeling” that sentence will be brought up in the lawsuit 😅 He acknowledged the possibility of it being wrong and then based his next action off a “feeling”. Not evidence. The AI detection is not evidence. It is not admissible in court because it cannot prove beyond reasonable doubt that they are the same person. Error is rooted in these systems and although they can appear to be correct 99% of the time with more simple tasks. As things get complex there’s more nodes and more room for error. Hence why we can see stuff like this. This AI integration is “cool” and CAN be extremely helpful for the public but it is on us to remain ethical with our use but also our interpretation of what the AI is presenting to us. AI isn’t inherently bad or malicious. It becomes that way because of the user— this is a perfect example. The officer took what the AI is presented and did not conduct any critical thinking, he took no extra avenues to determine who this man is or verify with someone else. It’s not the AI pushing this officer to violate this man’s rights. The AI presented them with a response and they interpreted it as the undeniable truth without investigation.
youtube 2025-12-14T18:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw2arE4OtZwui6sFQh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzHmCSKBCvSyr08K2Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzCYg0pLrJh0nGu3MN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzNDZc81NkYt77cOWZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwm1xcJ5n978ukliVN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzlRp1kerzehHBHg_54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzUKGDbDVEQAPxBq354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwms3SSE51vpPxb9q54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3udE6tWanbf6gwzp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwQ3s4s2fDAkvUZ7QF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
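The per-comment coding table can be recovered from a raw response like the one above by parsing the JSON array and matching on the comment id. Below is a minimal sketch; the `parse_coding` helper name is hypothetical, the field names mirror the JSON keys shown above, and the value vocabularies are inferred from this one response rather than taken from the actual codebook.

```python
import json

# Allowed values per dimension, inferred only from the response shown above;
# the project's real codebook may permit other values.
VALID = {
    "responsibility": {"user", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed"},
}

def parse_coding(raw: str, comment_id: str) -> dict:
    """Return the coding row for one comment id from a raw LLM JSON response."""
    rows = json.loads(raw)
    for row in rows:
        if row.get("id") == comment_id:
            # Reject values outside the assumed vocabulary rather than
            # silently recording a malformed code.
            for dim, allowed in VALID.items():
                if row.get(dim) not in allowed:
                    raise ValueError(f"unexpected {dim!r} value: {row.get(dim)!r}")
            return {dim: row[dim] for dim in VALID}
    raise KeyError(comment_id)
```

Validating against a fixed vocabulary is worth the extra lines here: LLM output occasionally drifts from the requested label set, and failing loudly keeps bad codes out of the table.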