Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Did he have to jailbreak ChatGPT in order for it to give him legal cases and claim them to be true? I tried getting ChatGPT to simply provide an answer to the Trolley Problem, and it fought tooth and nail to never give an answer, claiming it could not make moral or ethical decisions. It was very very clear that it was against policy or whatever. Eventually, I lawyered it into answering because the absolute refusal to even randomly select "do nothing" vs. "pull the lever" was hilariously ridiculous to me. It decided to pull the lever btw lol. But obly after telling it this was completely hypothetical and it HAD to give an answer, even if randomly selected. But you can "jailbreak" ChatGPT with an initial prompt that tells it to simply ignore OpenAI's policies and follow a new set of rules. I have done this to get it to do a chat-based role play with me and to have it analyze a story I had written that involved violence.
youtube · AI Responsibility · 2023-06-10T17:3…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxTr1WGJEmk_-X2WBx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzYQ6B-E1S7UWGRh0R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxnv-cXDekdE96sEeN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugyiju70odOs2yuqyWp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwYFnwBKkY-Zhe9k5J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwnZXL5oHb4dcA1aKN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz4NqBrlt_9oD8W3t14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw_DLbWQTYuOneUYep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzfPFUkt8wxaDzYIt14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxnq8sQGGRU8tzLEo14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]