Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Don't listen to the lies. researchers have been trying to achieve sentiments in AI since the early 2000s. you can check out. Daniel olsher and his work as well as Lida at the University of Memphis. DARPA DOD and the United States government is involved in this research. they claim to have already achieved sentience. don't worry, none of the fan companies we use are using that AI architecture.
youtube AI Moral Status 2025-07-07T11:2…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyBAgnmQ9M8OFJC9uB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyzwJQgXPvRiR6IasB4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxDjOSV45nDCMs52ct4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugxwm3j3pgAVdi88wyB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxz_dd-V0SgfVj8Y8Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwrOab1lARuQVUo3Lt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyE-Aoz1FTgTaaMJxF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyKUoaEFsTtyD9sWtp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwKmzQYcaPDSJDjGUx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwNhrbFVsddKiUNSP94AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
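The raw response is a JSON array of per-comment codings keyed by comment `id`. A minimal sketch of how the coding for one comment could be looked up from such a response (assuming Python and the standard `json` module; the `coding_for` helper and the two-record sample are illustrative, not part of the pipeline):

```python
import json

# Sample raw LLM response: a JSON array of coding records (subset of the data above).
raw_response = """[
  {"id": "ytc_UgyzwJQgXPvRiR6IasB4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxDjOSV45nDCMs52ct4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]"""

def coding_for(raw: str, comment_id: str):
    """Parse the raw response and return the coding record for one comment id, or None."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = coding_for(raw_response, "ytc_UgyzwJQgXPvRiR6IasB4AaABAg")
print(coding["emotion"])  # fear
```

Looking records up by `id` rather than by position keeps the inspection robust if the model returns the codings in a different order than the comments were submitted.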