Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Robot says it will destroy humans... or guy programmed her to say that? Or he fo… (ytc_UgikKCXfu…)
- @AsianDadEnergy I’d say his conclusion is that we seem to be pretty close to some… (ytr_Ugwdjc4IB…)
- > We need to import more workers
  > AI will wipe out the working class.
  So... wh… (ytc_UgyHldpfH…)
- I really don’t watch let’s plays in general anymore but you guys are the only ch… (ytc_UgzZO4MVp…)
- @SArthurH Drawing you committing crimes, drawing sexual photos of your family, d… (ytr_UgzIeCnfD…)
- The AI was trained to avoid anti semitic answers by Google. Ask it about the a… (ytc_Ugyvub9XM…)
- Don't worry, robots like Sophia are here to engage and assist! If you're interes… (ytr_UgzdLoQSo…)
- My name is Cody Akins what I'm about to say is not a joke most will not believe … (ytc_Ugx46wN_l…)
Comment
It also assumes that such a threat would be a result of a single monolithic system. Or an oligarchic one.
I can't remember the name, but one science fiction story I read, hypothesised that a more likely risk of AI isn't one of "AI god hates humans", but rather "Dumber AI systems are easier to build, so will come first and become ubiquitous. But their behaviour will have motivations that are very goal orientated, they will not understand consequences beyond their task, their behaviour and solution space will be hard to predict, let alone constrain, and all of this plus lack of human agency will likely lead to massive industrial accidents."
At the start of the story, a dumb AI in charge of a lunar mass driver decides that it will be more efficient to overdrive its launcher coils to achieve _direct_ Earth delivery of materials, rather than a safe lunar orbit for pickup by delivery shuttles. Thankfully one of the shuttle pilots identifies the issue and kamikazes their shuttle into the AI before they lose too many arcology districts.
reddit · AI Governance · 2024-05-27 (Unix timestamp 1716800763) · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_l5uxame", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_l5vvkfi", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_l5vzh5a", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_l5ughy0", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_l5uyz9e", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
```
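The raw response is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). The look-up-by-comment-ID step above can be sketched as follows; this is a minimal illustration, not the tool's actual API, and the `lookup_codes` function name and `raw` variable are hypothetical:

```python
import json
from typing import Optional

def lookup_codes(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM coding response and return the codes for one comment.

    The response is expected to be a JSON array of objects, each with an
    "id" field plus the coding dimensions (responsibility, reasoning,
    policy, emotion).
    """
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            # Drop the id so only the dimension -> value pairs remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None  # comment not present in this batch

# One record from the response shown above.
raw = ('[{"id":"rdc_l5uyz9e","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(lookup_codes(raw, "rdc_l5uyz9e"))
# → {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#    'policy': 'unclear', 'emotion': 'fear'}
```

Returning `None` for an unknown ID (rather than raising) makes it easy to flag comments the model skipped in a batch.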