Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting ChatGPT response... skews what an anagram is... why is not 'Satoshi Nakamoto' an anagram of 'I am not ok as a host'? While it is true that both "Satoshi Nakamoto" and "I am not OK as a host" contain the same set of letters, the latter is not an anagram of the former because not all of the letters are used in the same frequency or order. For example, the letters "h", "s", and "t" appear twice in "Satoshi Nakamoto", but only once in "I am not OK as a host". Similarly, the letters "m" and "k" appear twice in "I am not OK as a host", but are not present in "Satoshi Nakamoto". To form an anagram, all of the letters from the original word or phrase must be used exactly once (although additional letters can also be added, if desired). In the case of "Satoshi Nakamoto", there are many possible anagrams that can be formed using all of the letters, such as "Aim to stash a nook" or "Oh, a tsunami took a". Only it got the anagram wrong. Why?
youtube AI Moral Status 2023-03-05T21:3… ♥ 1
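The comment above hinges on the standard anagram test: two phrases are anagrams exactly when their letter multisets match, ignoring case, spaces, and punctuation. A minimal sketch of that check (the function name is ours, not from the source) confirms the commenter's point that the two phrases do in fact share identical letter counts:

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    """Compare letter multisets, ignoring case and non-letters."""
    norm = lambda s: Counter(c for c in s.lower() if c.isalpha())
    return norm(a) == norm(b)

print(is_anagram("Satoshi Nakamoto", "I am not OK as a host"))  # True
```

Both phrases normalize to 15 letters with the same counts (a:3, o:3, s:2, t:2, and one each of h, i, n, k, m), so the quoted ChatGPT frequency claims are incorrect.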
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgznNXLODYxVUr2BGcN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwz8QEPyqn6w68NQJB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzXXQETBrnwF5iNtTl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzdpQBMDXk_ivmn4qh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwTu98UvOpm6KmjFb54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytFXUrnlsa6dZc9BF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwONrwxFew8--cpysN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw8e8oL57UbXZL30m14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYqTgI1lf5HecF6p14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwtxCAWoqdKJ0At8Ml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
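The raw response is a JSON array of per-comment codes over four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of tallying value distributions from such a response, assuming only the field names visible above (the function name and the shortened ids in the demo data are ours):

```python
import json
from collections import Counter

def tally_codes(raw: str) -> dict:
    """Count how often each value appears in each coding dimension
    across a raw JSON array of per-comment codes."""
    records = json.loads(raw)
    dimensions = ["responsibility", "reasoning", "policy", "emotion"]
    return {dim: Counter(r[dim] for r in records) for dim in dimensions}

# Two records shaped like the raw response above (ids shortened for the demo):
raw = '''[
  {"id": "ytc_1", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytc_2", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]'''
print(tally_codes(raw)["emotion"])  # Counter({'fear': 1, 'indifference': 1})
```

Tallies like these are what the per-comment "Coding Result" tables aggregate into when summarized across a whole video's comment thread.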