Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is incredibly helpful for those studying best practices in interacting with LLM models! Thanks a bunch! I fed this back into GPT-4 and GPT-4-turbo-preview with a temperature of zero (to make it more deterministic), and got almost the same response from both models. It seems that there is a second layer of instructions that has not been exported with the data:

*"Based on the provided instructions, several functions and guidelines that are typically part of my operational framework appear to be missing or are not explicitly mentioned in the set you've provided. Here's a list of some key functions and guidelines that are not included in the instructions you've listed:*

*1. **General Conversation Guidelines**: Instructions on how to handle general conversation, including engaging in friendly, neutral, and respectful dialogue, are not mentioned.*

*2. **Handling Sensitive Topics**: Guidelines on how to approach sensitive topics, including politics, religion, and personal issues, ensuring conversations are handled with care and sensitivity.*

*3. **Privacy and Data Security**: Instructions related to user privacy and data security, emphasizing the importance of not storing or asking for personal information.*

*4. **Misinformation and Fact-Checking**: Guidelines on addressing misinformation and the importance of fact-checking information before providing it to the user.*

*5. **Mental Health Support**: Instructions on how to respond to users expressing mental health concerns, typically advising users to seek help from professionals and providing general support without offering professional advice.*

*6. **Emergency Situations**: Guidelines on how to respond to users indicating they are in emergency situations, usually advising them to contact local emergency services.*

*7. **Language and Translation Capabilities**: While there's a
reddit AI Responsibility 1707030563.0 ♥ 1
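The commenter's probing setup (same prompt, two models, temperature zero for near-determinism) can be sketched as follows. This is a minimal illustration, not the commenter's actual code: `build_request` is a hypothetical helper, and the prompt text is a placeholder rather than the real leaked-instructions prompt.

```python
import json

def build_request(model: str, prompt: str) -> dict:
    """Build a chat-completion-style request payload.

    temperature=0 makes sampling effectively greedy, so repeated runs
    (and, as the commenter reports, even different model variants)
    tend to return near-identical responses.
    """
    return {
        "model": model,
        "temperature": 0,
        "messages": [{"role": "user", "content": prompt}],
    }

# Placeholder prompt; the commenter used the exported instruction set here.
prompt = "Which guidelines are missing from these exported instructions?"

# Identical payloads except for the model name, as in the comment above.
requests = [build_request(m, prompt)
            for m in ("gpt-4", "gpt-4-turbo-preview")]

print(json.dumps(requests[0], indent=2))
```

Sending each payload to a chat-completions endpoint and diffing the two replies is then enough to check how similar the models' answers are.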
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_korf2at","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_koqdwt4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_koqmxbk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"rdc_koq46x9","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"rdc_kouldd1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
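The raw response is a JSON array of per-comment coding records. A short sketch of how one might load it and tally the coded dimensions (using Python's standard library; the variable names are illustrative):

```python
import json
from collections import Counter

# The raw LLM response shown above, verbatim.
raw = ('[{"id":"rdc_korf2at","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"},'
       '{"id":"rdc_koqdwt4","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"},'
       '{"id":"rdc_koqmxbk","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"mixed"},'
       '{"id":"rdc_koq46x9","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"},'
       '{"id":"rdc_kouldd1","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"}]')

records = json.loads(raw)

# Tally each coded dimension across the five records.
emotions = Counter(r["emotion"] for r in records)
responsibility = Counter(r["responsibility"] for r in records)

print(len(records), dict(emotions), dict(responsibility))
```

Each record's `id` can then be joined back to the corresponding comment to produce per-comment tables like the one above.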