Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can absolutely guarantee you the alleged training with copyright material is true. I’ve been able to prove this. In my case, it was gpt 3.5. I was telling my gfs daughter to just use ChatGPT to get lyrics, and bet her I could find any song faster than her. To my surprise, I got a typical ‘I cannot assist you with that’ response. Yet I could ask what the same song was about - indirect questions to which it could accurately answer. Fast forward a little, and Pliny starts releasing jailbreak prompts like it’s his full time job. Remember there wasn’t a ‘search the web’ option, so these models were to our understanding sandboxed and offline. After jailbreaking with god mode, I could get it to give me lyrics for pretty much anything. This was around the copyright issue situation.

To this day, models are absolutely trained on a ton of material that is deemed ‘unsafe’. But it makes you wonder:

1. Do you believe the company has no way of deactivating their ‘Harmful’ rules? If a model can be socially engineered into giving harmful data, it means it already is aware of that data. It also means there is always a risk of bypassing ethical restrictions.
2. If they can, imagine the power they could wield after training models on things like leaked datasets, all human literature, news, etc

Do I believe OpenAI had to do with his death - nope. I can absolutely confirm that ignorance is bliss after a certain point with respect to computers/technology. I also think that, learning too much too quick - or rather being brilliant like he was - significantly increases the potential of getting shell shocked by this sort of thing. I am curious if there was any history of mental illness. But I dunno. Given his skill set, I can almost guarantee you for some time now he knew of more than he could handle. If that makes any sense Rip and condolences to family
youtube 2025-05-22T07:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwtWh9Heq1r6EdGVcV4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxEKWycNLAU3aGeAGZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyC1XuCL6sMGO5OeQ14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzcnzMtJw4NeRl6Wbx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzYo_b3c2hv4-FfCYZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgykFwrogtMN58dRujR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxQilnFxjxywBxSBL94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwqoLLELNGyKeQTQ4l4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxhOuLxJfMSmJ9zYIh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzk4N-AvxL8wmh2hKB4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]
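The raw response is a JSON array of per-comment codes. A minimal Python sketch of parsing and validating such a response; the allowed category values below are inferred only from the codes visible in this export, and the full codebook may contain more:

```python
import json

# Allowed values per dimension, inferred from the codes seen in this export
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    values fall within the known categories."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
print(parse_codes(raw)[0]["emotion"])  # indifference
```

Dropping (rather than repairing) out-of-vocabulary records is one reasonable choice here; logging them for manual review would be another.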