Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Do you not know what Ai is or at least how they currently work?? They are fancy computer programs that are designed to react to certain prompts or do something at certain times. They don't have a consciousness they don't have free will to say anything that they choose and they can't go against what they aren't allowed to say. The robots that they are making while impressive they can't exactly measure up to humans and if one approached you, you could just easily push it over. While I hate Ai because it can replace peoples jobs and does terrible things to the environment we aren't living in the terminator movies and Ai still has mountains of obstacles to get over before it reaches that level and people aren't that dumb as to create and allow the purchase of something that would kill people. At least to not any civilian.
youtube AI Harm Incident 2025-08-25T14:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugw49cVqOfT6ajGEGqh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgytLuZ0fOEvJkmFosF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzzsTRcrUjNp6M0a3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwpnXodS0Gj6ItQ0Ct4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxodhSWm1oxA4AgeIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx7kWm0aEWBCDK2vTJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugw0rNz6NeN2fBmT20p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx0VesQ_qDJmEl-fm94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz-0cJqCxdUKNDa4I94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwlatGFFA252h7PGyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]