Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is a robot. Basically robot or computer is a soldier, will always follow orders meaning the program, because of hardware, software and limitations of the one that made it the human. A computer can crash, can be infected can have errors, etc. Because basic nature of this world, rule number maybe saying that there is no perfect human or humans never made a perfect thing. All, shoes,, buildings, cars need repairs at some point are dumped in the bin. At this moment ans 1000 years after AI will learn all human mistakes plus lie, steal, cheat, kill, create tons of fake news. Ai is so smart but nobody gonna see AI in nato, eu, un, who g7 or in human leadership. Why? Because control freaks don't give up control to a soldier. Ai is a trojan horse, a spy, nothing more.
youtube AI Governance 2024-05-29T21:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
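A coded record like the one above can be checked against the coding scheme's label sets before it is stored. The allowed values below are a sketch inferred from the labels that actually appear in this export, not a definitive schema:

```python
# Assumed label sets, inferred from values seen in this export -- not the
# tool's authoritative schema.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"resignation", "outrage", "mixed", "indifference", "approval", "fear"},
}

# The coded result shown in the table above.
record = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "resignation",
}

def validate(rec):
    """Return the dimensions whose value falls outside the allowed set."""
    return [dim for dim, val in rec.items() if val not in ALLOWED.get(dim, set())]

print(validate(record))  # [] -- every dimension carries an allowed label
```

An empty list means the record is well-formed; any listed dimension would flag an out-of-scheme label from the model.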
Raw LLM Response
[
  {"id":"ytc_Ugxs36kay4lPyO66ZLR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzOGM05GRjjNumC0u94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgybYizKpnSa0LuxItd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxOcRfG4pR5OOWC5-t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxXxX9nePTwUm4szQh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy_WsPbzJh5owOUSJ14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwa6Q1Zj7sf-BIDFlB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzC5VeJ8QlZERHmzd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzc5WtTZUj_Iy9tXuF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyAld1yDH2qt6_E2zh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
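The raw response is a JSON array keyed by comment id, so looking up the coding for one comment is a single parse-and-filter. A minimal sketch, abbreviated to two records from the response above (the `code_for` helper is illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above; the full array
# contains ten.
raw = '''[
  {"id":"ytc_Ugxs36kay4lPyO66ZLR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxOcRfG4pR5OOWC5-t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]'''

def code_for(comment_id, raw_json):
    """Return the coding dict for one comment id, or None if absent."""
    return next((r for r in json.loads(raw_json) if r["id"] == comment_id), None)

rec = code_for("ytc_UgxOcRfG4pR5OOWC5-t4AaABAg", raw)
print(rec["emotion"])  # resignation
```

The record with `emotion: resignation` matches the coding result shown in the table, which suggests (though the export does not state it) that this comment's id is `ytc_UgxOcRfG4pR5OOWC5-t4AaABAg`.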