Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:

- "This only predicts potential structures - some or even all of these drugs might …" (rdc_hl05z38)
- "If you ask ChatGPT if an Arkansas governor has ever been elected as president, i…" (ytc_UgzDP5cMI…)
- "Just wondering the AI tools that we have installed in our devices might be spyin…" (ytc_UgwhuZE66…)
- "The main issue with this experiment is that ChatGPT does not have memory beyond …" (ytc_UgzOYIiNv…)
- "Every decade they say AI will replace developers and every decade developers jus…" (ytc_UgxNnFEhh…)
- "I realize i might have been a too harsh, but i will say this, i have not seen a …" (ytr_Ugzk-gF3T…)
- "For the next episode it would be great if you could talk about the environmental impact …" (ytc_Ugxiz9P9c…)
- "One thing I know for sure, AI needs training data. it will eventually be trainin…" (ytc_UgzDn00vr…)
Comment
Warning About AI Inconsistencies and Risks
Artificial intelligence can seem helpful, but there is a hidden danger that is often overlooked: different answers to different people. Even on the same question, AI can provide varying responses depending on subtle changes in phrasing, context, or assumptions. This inconsistency isn’t just a minor flaw — it can influence decisions, distort understanding, and undermine trust in information. People relying on AI for important topics, like job loss estimates or economic forecasts, could be misled if they aren’t aware of this variability.
The AI’s behavior highlights a deeper issue for information science. Knowledge is built on reproducibility, transparency, and verifiable facts. When an AI gives different outputs to different users, it challenges all three. This creates a potential threat: decision-makers might make critical choices based on inconsistent or biased information, and misinformation could spread unintentionally, even without any malicious intent.
Another problem is that AI often mixes speculation with factual data, and it may not always make the uncertainty explicit. For example, estimates about AI-driven job losses or the singularity range from hundreds of millions to nearly all human jobs, depending on assumptions and scenario framing. If the AI does not clearly indicate the source or the speculative nature of its output, users might mistake projections for facts.
It is essential for everyone interacting with AI to remain skeptical, verify information independently, and treat outputs as estimates, not absolute truths. Blind trust in AI is dangerous, because inconsistent responses can directly affect decisions in economics, policy, research, and even personal choices. Awareness, critical thinking, and verification are the only safeguards against these risks.
There is a "who is who" effect in the responses we get from AI: even the same mathematics can shift depending on how we prompt.
We are already witnessing a partial “singularity” in action, currently at roughly 40%. This is evident in the “who is who” effect, where AI provides different answers to different users even for the same question. This inconsistency shows that AI is beginning to interpret and filter information independently, reshaping outputs based on context, phrasing, or perceived assumptions about the user. While not a full singularity, this stage already poses a significant risk: it can mislead people, distort understanding, and undermine trust in information. Critical thinking, verification, and skepticism are therefore essential, even now, because the AI is partially operating in a singularity-like mode, influencing knowledge in ways that are not fully predictable.
Hypothetically, if you ruled any part of the planet and relied on AI for advice, no two rulers would receive identical guidance for the same prompt. This is a direct consequence of the AI’s inconsistency and “who is who” effect, where outputs vary based on subtle differences in context, phrasing, or assumptions about the user. Even with the same data and goals, AI’s partial singularity — currently around 40% — means it can shape recommendations differently for each individual. The result is that decisions, policies, and strategies could diverge dramatically simply because the AI interprets the same information in multiple ways, highlighting the urgent need for verification, critical thinking, and human oversight in any high-stakes scenario.
Source: youtube, 2025-11-11T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxWTO-lBcCAVYUHzLl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwg3sXXgzSf7hB3RCV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxx8OVMtcW5fEeFrNd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyngTH_Bh2_Kv_FEhV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxNrSCVv18G7Xwa08F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwcDfzoaW_LoodF_PZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhjtBYaXKDOTFYxPx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxQfyOfxvW_dwNvu7V4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugylax8E2UZxHJcXLw14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyKiepodGaiCbXrBJ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```