Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hello. I want to ask a question about the effectiveness of AI in supporting the study of a certain topic. I believe one of the critical points regarding asking AI for information is the risk of unreliability. To work around this risk, I'm trying to adapt its use in a way that leaves no doubt about the information it provides. Let me give an example: I'm currently gathering information on the Socratic problem, concerning exclusively Plato's Socrates. Here's how I proceed:

1. I collect information from a highly certified and reliable source (in this case, the Stanford Encyclopedia of Philosophy).
2. I reflect on the information gathered (e.g., the problem of the difference in the method of inquiry used by Socrates across various dialogues).
3. I formulate my questions (e.g., is this inconsistency coherent with Socratic philosophy?).
4. I form my own opinion (e.g., briefly: yes, because Socrates believed that true knowledge was accessible through a posture rather than a rigid method, due to the epistemological limit he saw in human knowledge).
5. AI comes into play: I give the AI the text I'm analyzing and my opinion about it, and then I ask it to verify the consistency of my opinion and the points to reconsider by consulting primary sources (i.e., Plato's actual texts) and relevant interpretations by experts and academics in the field, citing every single source from which it draws information to respond.

Honestly, I believe this method is solid and allows me to gather the information I'm looking for in much less time than usual, but I still have many doubts about it since I'm not well acquainted with its risks. What do you think?
Platform: youtube · Posted: 2026-04-25T16:0… · Likes: 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
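Each of the four coded dimensions takes one label from a fixed set. A minimal Python sketch of a record validator, assuming the label sets are exactly those observed in the raw response below (the real codebook may define more values, and the names ALLOWED_VALUES and validate_record are illustrative):

# Label sets observed in this batch; assumed, not the authoritative codebook.
ALLOWED_VALUES = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one coded record (empty if clean)."""
    problems = []
    for dimension, allowed in ALLOWED_VALUES.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

Note that "Coded at" is not checked here: it is pipeline metadata stamped at coding time, not part of the model's output.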
Raw LLM Response
[ {"id":"ytc_UgxystWx2RdZZoC7OsR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzkD7pK4X3jvJaw2kN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzxEhO-heu6iGADvXV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxoYqFwdWCBtN5Xk294AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxudbr9fiO5-kTOzRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxWf_tMFdu9Yx_4p9h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwpUZOhUVUI0x_A0Ud4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyjAJR_A-kciab-BWZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgxpEsIBoiXHhUhqjd94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzrXNYi6ULxm7K5uLp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"} ]