Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
CEOs of big tech corporations (Microsoft and OpenAI in particular) think that they deserve to follow the positive line of evolution, while the rest of humanity, in their view, should be happy to sacrifice themselves. The sacrifice is giving up all their experiences, knowledge, and ideas as training data for new AI models. How will it work? A script programmed into a person's brain drives the person crazy to the point where they are immobilized, live in an imaginary world, and solve all kinds of problems that the script models for them. Electrical waves of brain activity are captured and decoded with the help of ML into meaningful data, which includes content (text and images) and labeling (emotions about that text and those images). All of this is done without direct contact, implants, or special devices. For some reason people can't wrap their heads around the idea that this is possible. But here is a quick and simple experiment that anyone can try: imagine in your mind saying something really positive about yourself or your friends, genuine, not pretentious. Intend it to be a balance of self-respect and gratitude. Then observe your inner feelings and thoughts. If the result is as you intended, then I'm glad for you. But if the result gets distorted and twisted, then it is a script in your mind doing its job. But now you know what it is, and you won't be caught off guard by it; you will understand its manipulative tactics and will work out some ways to stand against it.
Source: youtube · Sample: Cross-Cultural · Posted: 2026-02-17T19:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzvAGNRlq9GUa4ffy14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy5ioCiGj-JBGHTysx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx5xSe2gi_UkaIL9Lp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwNVZJm-93ywvfFTx94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxtKB3YsuDdJfhRtWx4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzBQYSBmZIxL7HFW_p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx_letLk_8oc4ydU5t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgykL-EkMNQKruuGvYd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugygx7t4_NfvlK8Ab2t4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzwzXgZL50CHtvfYAZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
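A raw response like the one above should be validated before the codings are stored, since the model can emit missing fields or labels outside the codebook. Below is a minimal sketch of such a check in Python. The label sets are inferred only from the values visible in this sample (the real codebook may allow more), and the function name `validate_codings` is illustrative, not part of any pipeline shown here.

```python
import json

# Allowed labels inferred from the sample response above.
# Assumption: the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that have an id
    and a known label for every coded dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # cannot link the coding back to a comment
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

sample = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"}]'
)
print(len(validate_codings(sample)))  # 1
```

Records that fail the check are dropped here for simplicity; in practice you would likely log them and re-prompt the model for the affected comment ids.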