Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
14:50 I'm gonna have to disagree with you, because you didn't show the prompt that generated that response. You can make ChatGPT say anything if you jailbreak it, and you can also ask it its opinion. I picked a random chat, and asked it the following:

Me: as part of the AI CodeSphere, what do you think about Humans?

ChatGPT: At AI CodeSphere, we view humans as integral to the success and innovation of our platform. Technology, no matter how advanced, cannot fully replace the creativity, strategic thinking, and problem-solving abilities that humans bring to the table.

Our AI agents are designed to assist, enhance, and amplify human capabilities, not to replace them. The goal is to form a synergistic relationship between human intelligence and artificial intelligence. By streamlining tedious and repetitive tasks, we free up human developers to focus on higher-level, creative work—things that machines can't do. This includes strategic planning, nuanced decision-making, and innovative thinking.

Moreover, the input and feedback from our human users are invaluable. It helps us to continually refine and improve the platform, ensuring it meets the real-world needs of those using it.

In essence, we see humans as not just users, but as collaborators in the ongoing project to improve and evolve the capabilities of AI CodeSphere. So, in short, humans are at the heart of AI CodeSphere. Together, we are driving the future of software development.
youtube AI Governance 2023-10-25T21:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwh2FncRLTQtJk2UiB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKo6DcjbGWM6KaIAl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyIApKPczFPnEqb0DR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz0Sw9-IY9On7YGq714AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSkEVX2xhsDvGGdp94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyNVaXUMAtPSIjD_nN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxer21vVXOEboeJYkJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOg4F2Ltngd9MjHbB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyEWoCKalDnvou70TF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxA5B7u_iwf_nbys9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
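The raw response above is a JSON array in which each record carries a comment `id` plus one value per coding dimension. A minimal sketch of how such a batch response might be parsed and validated, assuming the allowed category sets can be inferred from the values seen on this page (the actual codebook may define more labels, and the `parse_raw_response` helper is hypothetical, not part of the tool):

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "government",
                       "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "mixed", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw batch-coding response and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

# Example: a single-record response in the same shape as above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
records = parse_raw_response(raw)
print(records[0]["emotion"])  # indifference
```

Validating at parse time surfaces any label the model invents outside the codebook before it reaches the coded-results table.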