Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I use ChatGPT for checking if my maths homework is correct, and even that proved an extra exercise. As is often the case, it's more work to check whether ChatGPT is correct than to do the original exercise. The most enjoyable outcome was when it said something along the lines of: "Calculating that would be difficult by hand and require extensive external tools, in the form of (sum1, sum2, sum3, ....., sum1000). Doing so gives the answer 4.37." I would like to note that I used 3.5, which has no access to any tools whatsoever. It gets even worse if you tell it it's wrong, because it is most likely to respond with something along the lines of: "Thank you for correcting me, I was wrong; using external tools, the correct answer is 643.2", which again is nonsensical.
youtube AI Responsibility 2023-06-10T13:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyZyOpCw3yPy0qRuXJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgySRvqrU5UzfOdjkjl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxhtKLezJuTZWbo60h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxqskV-KZE9w-QPRnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwHrgDIPSMAHuMvPH94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxO_P1ROZEgtoAI1_l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxlWO8xuKqVTMpOU354AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyxDeyyGhinEa7kIWh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz2vTwppfMdKH_HpCh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgztBYUEG953nvn3Fxh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
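The raw response is a JSON array coding a batch of comments, so the record for any one comment can be recovered by indexing on its id. A minimal sketch (variable names are illustrative, and only two of the rows above are reproduced here for brevity):

```python
import json

# Raw batch response from the model: a JSON array in which each object
# codes one comment across the four dimensions.
raw = """[
  {"id":"ytc_UgySRvqrU5UzfOdjkjl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2vTwppfMdKH_HpCh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# Index the batch by comment id so a single comment's codes can be looked up.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

record = codes_by_id["ytc_UgySRvqrU5UzfOdjkjl4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# ai_itself consequentialist none approval
```

The record printed above matches the coded values shown in the table for this comment.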