Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Actually, modern AI language models don't take training data from people who use them. They tried that back in the early aughts and it took like two days to turn the AI into a neo-Nazi. So they haven't been doing that this time around. That said, you probably still shouldn't use AI for your therapist because:
1. the AI is reporting all your secrets to their corporate overlords.
2. Every time you write anything to an AI, the amount of energy and water it uses to respond to you is like dumping an entire bottled water full of boiling water into the garbage. This is because the computers that AI runs on are water cooled. Anything less and they would melt from the heat they generate.
3. AI is not intelligent, at all. It is a glorified calculator, imitating human speech the same way calculators imitate numbers. A cockroach has more actual intelligence than an AI does. Language and other patterns are all just math to them. Because of this, they are extremely stupid. There have already been cases of AI telling people to kill themselves, or responding to expressions of suicidal thoughts by providing people with the best ways to kill themselves.
Seriously, instead of calling it Artificial Intelligence, we should be calling it Mathematical Pattern Matching Software (MPMS).
youtube AI Moral Status 2024-09-03T23:1… ♥ 6
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugw6cdFJzbb8M95nwTN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMspPwkdkzOPDjm2J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwA3gQcXXvyBsvDxCZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_y38rnUuQUdzTayl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwqPvHyBfjUUjbSsH94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxpn1rbcwfig99cA_Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJT6AQW181YkiYp394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyzrUQbbY3u3XBidKB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy0zANgEQXfKMT1_e94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzO-JidaCu7Y7wrXLl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
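The raw LLM response is a JSON array with one coding object per comment. A minimal Python sketch of how such a batch response could be parsed to recover one comment's coding follows; the variable names are illustrative, and the excerpted `raw_response` reproduces only two of the entries shown above. That the entry with `policy: "liability"` corresponds to the comment displayed on this page is an inference from the matching field values in the Coding Result table.

```python
import json

# Excerpt of the raw LLM response above (two of the ten coding objects).
raw_response = '''[
  {"id":"ytc_UgyzrUQbbY3u3XBidKB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw6cdFJzbb8M95nwTN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Index the batch response by comment id for lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding whose values match this page's Coding Result table.
coding = codings["ytc_UgyzrUQbbY3u3XBidKB4AaABAg"]
print(coding["responsibility"])  # company
print(coding["policy"])          # liability
```

Each dimension in the Coding Result table (Responsibility, Reasoning, Policy, Emotion) is then just a field of the matching JSON object.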