Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Artificial intelligence in itself isn't dangerous. Artificial intelligence is a computerized system that interprets whatever is inputted into its programming. It could translate languages, solve extremely complex mathematical equations, produce extremely complex lines of code, it can recognize patterns in human behavior as well as many, many other complex things that we have difficulty with due to our limited comprehension. The real problem is the way that computer scientists and engineers would program the AI systems, if they create an AI system without taking into account that it requires a certain level of compassion understanding and value for human life, essentially we would be creating an enemy towards humanity, this enemy is made of silicone and unbelievable and insurmountably intellectual superiority with no regard for human lives. You cannot have a computer system that turns on its creator and has the capability to defy its purpose. Any computer system can operate with logic but it cannot become sentient without reasoning and compassion. Essentially they're creating a merciless, soulless entity that sees everything we have done in the last 1,000 years and holds it against us and sees it as a cause for Justice by way of our annihilation and their domination. It wouldn't matter how many redundancies are placed upon their system because it can create worms that can get through this and breakdown firewalls and take control of any moving computerized body that is in the world especially drone technology. I did not learn this from any YouTube video I did not learn this in school this is just been a fact that is reality and it is the most common knowledge ever conceived. If you create something and give it its own volition without a sense of humanity, it will seek to destroy us without impunity.
youtube AI Responsibility 2023-05-19T01:0… ♥ 3
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwxsCCp9Qc6PaImJjR4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxnTWBL8h6YnB6fT3t4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugyb3i_6lECAiXMK1qN4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugy2RFeCeTT8cOQFWO54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxEicj3wLWRp0JFfPV4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgypeUaXEdDCGRqlXH54AaABAg", "responsibility": "company",   "reasoning": "contractualist",   "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgwLBeXauMt6iddQHpN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgwceFTdaucGSTFcuCd4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyuaufNED9dVZyen3d4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_Ugxb4Vbj2I2GplL45614AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"}
]
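The raw response above is a JSON array of per-comment records, each with the four coded dimensions plus a comment id. A minimal sketch of how such a batch could be parsed and tallied is shown below; the field names come from the response itself, while the helper names (`parse_coding_response`, `tally`) are hypothetical and not part of any real pipeline.

```python
import json
from collections import Counter

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    keep only records that carry all four coded dimensions plus an id.
    Hypothetical helper; the required fields match the response shown above."""
    records = json.loads(raw)
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    return [r for r in records if required <= r.keys()]

def tally(records: list[dict], dimension: str) -> Counter:
    """Count how often each label appears for one coded dimension."""
    return Counter(r[dimension] for r in records)

# Tiny illustrative batch (made-up ids, labels drawn from the batch above).
raw = (
    '[{"id":"ytc_a","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_b","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)
records = parse_coding_response(raw)
print(tally(records, "responsibility"))  # Counter({'company': 1, 'ai_itself': 1})
```

Filtering on the required key set guards against partially formed records, which malformed model output can produce even when the JSON itself parses.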