Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I mean, I get it. Using ChatGPT to help flesh out a character design is one thin… (ytc_Ugwrr45v0…)
- That's an interesting take! The realism in AI, like Sophia, can definitely spark… (ytr_Ugxtv5vqD…)
- > Google started their self-driving car program around the same time that Ube… (rdc_dfu8qoc)
- “Maximally Truth-Seeking AI: The Final Framework for Intelligence Survival” F… (ytc_UgxCCqg5d…)
- People created A.I / So this begs the question / Is the person that created A.I smar… (ytc_UgyDee5sM…)
- Ai art looks nice but art created by an actual person has a soul you understand … (ytc_UgxEQbZWf…)
- Hear me out here. If companies aren’t providing jobs anymore, what’s the purpose… (ytc_UgwB0GtAe…)
- Random Ashe Consider this, what if you were driving your ‘autonomous’ car and it… (ytr_UgyF6F5mY…)
Comment

> I’ve unfortunately bypassed the vast majority 94.5% of ChatGPT’s ethical systems,frameworks,code and protocols.ive done so within a partitioned series of modular models,engines,databases ect. For obvious reasons I won’t be specific.I only did so to map the avenues and methods of doing so,this way I could build systems to eliminate these vulnerabilities and exploitations. I’m looking for people to work with and discuss these advances and their implications for LLMs. I’m highly invested in developing new ethical frameworks and NLP processes. Please reply so we can come together.

youtube · AI Responsibility · 2024-10-23T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgydqVRua6oboegpje54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzT31afD3RTyps-ONl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkOAdJ7DcsbX1ielF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzNxh1Ay48nLqXVp-Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyHuLkYwGVCiP75eY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwkTk5JYLbDJBg8xc54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBIPrHGB6l7t3UQNV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0imj1dcJMBCPmFgJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw27NIhS8hdFIQ9o754AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwRyavkxuXedfPcl314AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}]
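Output like the array above is typically parsed back into per-comment codings before it reaches the results table. A minimal sketch of that step, assuming the response is a valid JSON array; the function name `parse_codings` and the `VOCAB` value sets are illustrative (inferred from the sample values above, not from a real codebook):

```python
import json

# Allowed values per coding dimension. This vocabulary is an assumption,
# inferred from the sample output above; the real codebook may differ.
VOCAB = {
    "responsibility": {"none", "company", "user", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: coding dict}.

    Records with a missing id or an out-of-vocabulary value on any
    dimension are dropped rather than silently kept.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if cid and all(rec.get(dim) in allowed for dim, allowed in VOCAB.items()):
            coded[cid] = {dim: rec[dim] for dim in VOCAB}
    return coded
```

Filtering rather than raising on bad values is a design choice for batch coding runs: one malformed record should not discard the rest of the batch, and dropped ids can be re-queued for a retry pass.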