Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
chat gpt is wrong often but more dangerously its CONFIDENTLY wrong messing around i shot chatgpt some math problems and it got them wrong infact lemme see if i have the chat log

User: what is 2132 x 86
ChatGPT: 2132 multiplied by 86 equals 183,152.
User: are you sure
ChatGPT: Let me double-check that calculation for you. 2132 multiplied by 86 equals 183,352, not 183,152. Thank you for bringing it to my attention.

x3x That's just a glimpse of the potential pitfalls AI can stumble into. While some may argue that self-learning versions can correct their mistakes over time, there's a crucial caveat: it relies on the guidance provided by users. Unfortunately, there have been numerous instances where AI systems have been manipulated and misled by malicious actors, resulting in their distortion or malfunction. While ChatGPT offers a myriad of applications, its effectiveness greatly depends on the user's skill and discretion in utilizing it. Personally, I frequently leverage ChatGPT for drafting emails. I begin by outlining my message, then turn to ChatGPT to refine the language, adjust the tone, and enhance its overall presentation. Through iterative exchanges and careful critique, I'm able to achieve the desired outcome for example everything after x3x and before this sentence i drafted through chatting its a good tool but like any tool you must learn how to use it and when not to use it
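The arithmetic in the quoted exchange is easy to verify. A quick check, using only the numbers that appear in the comment itself:

```python
# Verify the multiplication from the quoted ChatGPT exchange.
product = 2132 * 86
print(product)  # 183352 -- the first answer in the chat log (183,152)
                # was wrong; the corrected answer (183,352) is right.
```

This confirms the commenter's point: the model's initial answer was off, and only the self-corrected second answer matches the true product.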
youtube AI Responsibility 2024-03-17T07:0… ♥ 1
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyN7x0GpQcjqRxzSBl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwC07C99SRYLfFs4pF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7ipfF5b4fKVt8rNB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxOSa8-pyreqexciTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwD6NntkngJyGX3dAx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2Rp-XcoWltZ8pWAN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyJ2c18XGy6vQfehVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxVtOvSY2PHregayNZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGo02M2gscyrL6oJR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzb-OALkt_tjhsxqIR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
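The raw response is a JSON array of per-comment codes, so the coding result shown above can be recovered by indexing on the comment `id`. A minimal sketch, assuming only the field names visible in the response (the helper name `codes_by_id` is illustrative, not part of any real pipeline):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyN7x0GpQcjqRxzSBl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwC07C99SRYLfFs4pF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

def codes_by_id(response_text):
    """Parse the LLM response and index each coded row by its comment id."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = codes_by_id(raw)
row = codes["ytc_UgwC07C99SRYLfFs4pF4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

Looking up `ytc_UgwC07C99SRYLfFs4pF4AaABAg` yields exactly the dimensions in the coding result above (responsibility `ai_itself`, reasoning `consequentialist`, policy `none`, emotion `fear`), which suggests that id belongs to the comment shown on this page.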