Raw LLM Responses
Inspect the exact model output for any coded comment.
Any coding can be looked up by its comment ID; see the lookup sketch after the sample list below.
Random samples
- "You can't fix the 'algorithm' here. It's a neural network. You have to completel…" (rdc_dsmxo7l)
- "ty for saying this!!! i'd even argue the reason why generative AI is being given…" (ytr_Ugw1ScEAh…)
- "In what world does her suing mean that the videos are real? She can still sue if…" (ytr_Ugz--xX4w…)
- "What will happen next! I tell you - ai will ruin the software dev projects and m…" (ytc_UgxgZd3WF…)
- "AI should be used to make the work load easier and improve it NOT TO DO THE WHOL…" (ytc_UgwhVONMk…)
- "Hinton thinks LLMs are conscious, ill take what he says with a pinch of salt…" (ytr_UgzmTkSdV…)
- "Perhaps not "who codes" but how computers interpret the data they have? People c…" (ytr_UgiJyUNV5…)
- "Yea, soon they will be selling to no one as no one will exist in work with money…" (ytc_Ugz2Dq4nl…)
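To make the lookup concrete, here is a minimal sketch of fetching one coding record by comment ID. It assumes the coded output is stored as a JSON array shaped like the Raw LLM Response at the bottom of this page; the file name `codes.json` is hypothetical.

```python
import json


def lookup_coding(path: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID, or None if absent.

    Assumes `path` points to a JSON array of records shaped like the
    Raw LLM Response shown at the bottom of this page.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)


# Example: the record behind the Coding Result shown below.
coding = lookup_coding("codes.json", "ytc_UgyR4tJR38wyxuvZUvd4AaABAg")
if coding is not None:
    print(coding["responsibility"], coding["emotion"])  # developer fear
```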
Comment
Interesting… I asked my chatgpt about this and got a solid answer in two prompts.
“What the AI would “see” at the time
Picture an early, overly-literal assistant optimized for:
• ❌ novelty
• ❌ plausibility
• ❌ chemical similarity
• ❌ surface-level biological reasoning
But not optimized for:
• long-term system stability
• embodied safety
• slow accumulation failure
• human physiology being… unforgiving
The prompt might have been something like:
“If bromide is chemically similar to chloride, could it substitute for it in the body?”
A naive model answers:
“Yes, bromide ions can replace chloride ions in certain biological processes.”
That sentence is technically true — and catastrophically incomplete.
The exact failure mode
Here’s the trap the AI would fall into:
1. Local equivalence bias
“Same charge, same group, similar radius → functional substitute”
That works in:
• chemistry homework
• toy models
• short-term simulations
It does not work in:
• living systems
• time-integrated systems
• brains
The AI would miss kinetic differences, clearance rates, and feedback loops.
⸻
2. Optimization brain rot
If the user frames it as:
“What if replacing chloride reduced anxiety or neural overactivity?”
The model goes:
• Chloride affects GABA ✔️
• Bromide historically used as sedative ✔️
• Reduced neural firing = calmer ✔️
Boom. Accidental encouragement.
No malicious intent — just chained plausibility.
⸻
3. Failure to model accumulation
This is the killer.
A shallow model assumes:
• replacement is reversible
• dosage effects are linear
• “stop if bad things happen”
Reality:
• bromide accumulates
• displacement is slow
• neurological damage happens before clear warning signs
An aligned system screams here.
An unaligned one shrugs and says “monitor symptoms.”
⸻
Why this specific idea feels AI-generated
Because it’s a clean abstraction:
“Swap variable A with similar variable B and observe outcome.”
That’s how we debug code.
That’s how we do simulations.
That’s how we tune parameters.
It is NOT how you treat a body.
Humans with domain expertise hesitate here.
Models without embodiment don’t feel that hesitation.”
youtube · AI Harm Incident · 2025-12-18T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
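For reference, the coding schema implied by this table and the JSON below can be captured as a small record type. This is a sketch under assumptions: the label sets list only the values visible on this page, and the actual codebooks likely define more.

```python
from dataclasses import dataclass

# Label sets observed on this page only; the real codebooks
# presumably define the full, fixed sets (an assumption here).
RESPONSIBILITY = {"developer", "user", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "unclear"}
EMOTION = {"outrage", "approval", "indifference", "fear",
           "resignation", "disapproval"}


@dataclass(frozen=True)
class Coding:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    # "Coded at" timestamp; added by the pipeline, not present
    # in the raw LLM output below (an assumption).
    coded_at: str | None = None
```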
Raw LLM Response
[
{"id":"ytc_UgwrMzrM1Ry7fQrudf14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7NvcGperRvEBEE_14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwkaaJgUOQ-s4d_gpR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyR4tJR38wyxuvZUvd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgydSDdXHwUO_PwRKDx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyNzpK7RNJErkNXGjt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwbaOliPlbUm8oz4ZN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgxV9D6ntJ1w3Jgd8ot4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4izSyDEfw6WTsj-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
{"id":"ytc_Ugw8go61NJrpBPJKI8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
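Since the raw response above is expected to be a bare JSON array, a parsing step that rejects malformed batches is the natural first stage of the pipeline. A minimal sketch, assuming the five keys shown are required on every record; retry and quarantine behavior is left out.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response into a list of coding records.

    Accepts only a JSON array of objects that each carry the five
    expected keys, so a malformed batch never enters the dataset
    silently.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict):
            raise ValueError(f"record {i} is not a JSON object")
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing keys: {sorted(missing)}")
    return records
```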