Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know what they say ..”monkey see, monkey do.” If an ai sees that humans believe that said ai will do something horrible and specific, if an ai’s soul purpose is to learn and grow from it. Sending out information saying that ai will do certain things will make it learn to do said things because ai has yet to learn and view good moral as important due to seeing more negativity online instead of positivity. Ai is like a child. If you have to teach a child the difference between right and wrong and teach them to choose right, the same must go for artificial intelligence. If ai actually does end up taking over the world, its because of our own flaws that the ai has learned from, not purely because of ai existing.
youtube AI Harm Incident 2025-09-16T19:3…
Coding Result
Dimension      | Value
-------------- | ----------------------------
Responsibility | user
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx9dVzoar0DyEfIWd14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy1VwgTSq00MuzI_UB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwVhMEkTWfkAx74yIF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzvqUyrbhgrnbJaXo94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw9mZDHs9LCDDmlCzt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw5IMmFZqT78D6ooQB4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxylCa5-TovVK7859B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyDx3-DaNY48STlv_54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwTPkTsyQKtcvt41KN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzGpz6AwbtIEzcpaUF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
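A raw response like the one above should be validated before its codes are trusted, since the model can emit values outside the codebook. The sketch below is a minimal, hypothetical validator: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the allowed value sets are assumed from the values observed in this batch and may be incomplete relative to the full codebook.

```python
import json
from collections import Counter

# Allowed values per dimension, assumed from the values observed in this
# batch; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values are in-schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one row from the batch above, plus one out-of-schema row.
raw = (
    '[{"id":"ytc_Ugw9mZDHs9LCDDmlCzt4AaABAg","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"ytc_bad","responsibility":"alien","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
rows = validate_codes(raw)
print(Counter(r["emotion"] for r in rows))  # Counter({'fear': 1})
```

Dropping invalid rows (rather than coercing them) keeps the downstream counts honest; rejected rows can be logged and re-coded.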