Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
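A lookup like this only needs to scan the stored coding results for a matching ID. A minimal sketch in Python, assuming the records are kept as one JSON object per line with the same fields the coder emits; the path `coded_comments.jsonl` and the JSONL layout are illustrative assumptions, not this tool's actual backend:

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if it was never coded.

    IDs follow the pattern seen in this tool: "ytc_..." for top-level
    YouTube comments and "ytr_..." for replies.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```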
Random samples
- "there still not any movement in the face. reassuring that we can still tell the …" (ytc_UgyAt2z8B…)
- "Ai - gives definition / Alex - stop , you sound like Jordan Peterson / Ai - my bad…" (ytc_Ugx6cKwru…)
- "I use AI to recreate a character I was writing. After 45 minutes of detailed pr…" (ytc_UgwNSTbG7…)
- "Sorry I know this video has more important matters, but the smirk after the AI c…" (ytc_UgwHLQf_5…)
- "A.g.i. ? Instead of A.i.? I haven't heard of this before, mind passing some info…" (ytr_Ugz-jQoRh…)
- "AI is likely to make worse even the things it supposedly makes better as the own…" (ytc_UgyLZZSX8…)
- "@valherone1970 - Yeah, that's indicative of so many of the small-minded people …" (ytr_UgzHIGwH2…)
- "Incel plus AI s|ut VERY DANGEROUS! remember that nonce InceL in Plymouth I was…" (ytc_UgwJ6b9Yy…)
Comment
@watsonga050300 I'm sorry, I can't take people who act this way seriously. It's legitimately like you people don't use these AIs, or if you do, that you're trying to find patterns in them. Anyone who has spent even five minutes seriously using OpenAI--as in seriously using it with the intention of drawing out its problems--knows how fundamentally broken the entire thing is at its core, and hearing people like you act like it's close to replacing humans just comes across like the most childish fearmongering in the world.
I use ChatGPT for stuff like helping me write stories, and it's good for more isolated interactions. It's absolute NONSENSE as far as long-form interaction is concerned though. It strings together conclusions with zero contextual reasoning. It'll string together a conclusion which is contradictory, depending on whether something is stated as one whole thing or in parts.
For example, as a writer, I was recently trying to get ChatGPT-4 to help me write a particularly erotic section of a story. It involved S&M. When I initially fed it the dialogue and asked how it suggested I continue, ChatGPT-4 interpreted the dialogue as harmful; as in, it thought it was abusive dialogue and was "concerned" for me. I then tried to give clarifying dialogue as context; ChatGPT-4 continued to tell me it was concerned for me. I then cleared the conversation entirely and started a new one, and this time I fed ChatGPT-4 the same dialogue with the clarifying dialogue included; this time ChatGPT-4 rightfully interpreted the dialogue as S&M dialogue, and responded accordingly. I repeated this many, many times and got the same result. It just completely failed to grasp a fundamental contextual situation that a human would have zero trouble grasping; to a human it's obvious how little sense the above situation makes, but when I tried getting ChatGPT-4 to recognise how nonsensical it was to interpret my dialogue differently depending on whether I presented it in two parts or as one whole thing, it couldn't grasp it. It kept gaslighting me.
Now, imagine the above scenario repeated over continuous conversations--and imagine how large-scale the gaslighting divergence that occurs with ChatGPT over long-form conversations becomes, when this situation is stacked upon similar situations over a period of days upon days. Get a grip on how toxic and dangerous this makes long-form conversations with ChatGPT.
As a writer, I find it ridiculous when people act like ChatGPT, or any AI, is anywhere close to the level of "taking over". I run into ridiculous problems like the above literally every goddamn day; it's an annoying, flawed tool at absolute most; a great tool, but a tool nevertheless. Anyone who's fine with acting like it's a magical robot that never does anything wrong is just an absolute idiot. These don't feel like flaws that are just gonna go away; they feel like fundamental issues that are always going to be there when developing AI, unless you can somehow give it legitimate sentience or simulate legitimate sentience 1:1, which is obviously just impossible to do. OpenAI is too uncanny and too fundamentally broken to EVER replace human beings. Anyone who acts otherwise needs to get more creative with how they're using OpenAI, because they clearly haven't spent enough time using it in a way that makes this obvious.
I'm absolutely sick and tired of having these kinds of conversations. As a writer, it's so obvious to me what nonsense the idea of ChatGPT as a "thinking" AI is. It gaslights people--and will forever continue to do this--but not even in a realistic human way, because it's not doing it because it's human; it's doing it because it doesn't even know what it's doing. I can't tell you the amount of times I've been told by people that ChatGPT is somehow magical, or can replace humans, and my main thought is always "well, good luck dealing with a world where people are being gaslit 24/7, I guess". If someone thinks this is an issue that can be overcome sometime in the future, good luck with that, I guess... but it doesn't seem like it can be. This seems like it's always going to be a fundamental core issue with AI that puts a doorstop in it ever becoming more than a powerful tool in human hands, at the very most.
youtube · AI Responsibility · 2023-09-23T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
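Each dimension in this table takes one value from a closed code book. A small validation sketch, assuming the value sets visible in this page's outputs are representative; the actual code book may allow additional codes:

```python
# Allowed codes per dimension, inferred only from the outputs shown on this page;
# the real code book may be larger.
CODEBOOK = {
    "responsibility": {"user", "company", "developer", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"outrage", "indifference", "approval", "fear", "mixed", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record; empty means it passes."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in {sorted(allowed)}")
    return problems
```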
Raw LLM Response
```json
[
{"id":"ytr_Ugz1skI-_XXv2HAUSuV4AaABAg.ABH5nJUx4mBABab4fsvi0S","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxzSYIrAWKp0drtJJV4AaABAg.A3NWIY8ZC6mA3aydce8r0p","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgzmOZOROV5s5T5YJ_p4AaABAg.A3987Ml5GF4A6R4Pglf-NR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzumz-WAOmz5WC068V4AaABAg.A2Mb6ni-rkoA2Snx9PJNsc","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugy_Tn48jtcssAvxLrR4AaABAg.A0w1rrMttnnA0yZCejf1r1","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzktGfZ1pK0spAASMt4AaABAg.9zSLYdCuxHZ9zU0uMq1JXS","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyHJaWL-98jyFdL07J4AaABAg.9xs3KyLdCldA1ofUMWTQbj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgwOwc43aseNMUAOsfN4AaABAg.9v-_aHlAYkL9v-a5xlROy7","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwOwc43aseNMUAOsfN4AaABAg.9v-_aHlAYkL9v-gi731yzK","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwOwc43aseNMUAOsfN4AaABAg.9v-_aHlAYkL9v0ShkktaP2","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
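The Coding Result table above is one row of this batch: the model codes several comments per call, and the per-comment entry is recovered by matching IDs. A sketch of that extraction step, assuming the raw text parses as a JSON array (a robust version would also have to handle truncated or malformed model output):

```python
import json

def extract_coding(raw_response: str, comment_id: str) -> dict | None:
    """Pull one comment's codes out of a batched LLM response."""
    try:
        batch = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # the model did not return valid JSON
    for item in batch:
        if item.get("id") == comment_id:
            return item
    return None
```

In this batch, the entry whose codes match the table (responsibility `user`, reasoning `virtue`, policy `none`, emotion `outrage`) is the last one in the array.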