Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwMObPFn… : The problem with this issue is the human´s ego. We wont stop until we show othe…
- rdc_dkep0lm : If you read the article it explains how self-driving cars aren't meant to be rel…
- ytc_UgyKVBAYD… : My opinion/quick rant: It is COMPLETELY different when artists take inspiration…
- ytc_Ugy9UNNZe… : Did ANYBODY, read Isaac Asimov's "I Robot", and his basic 3 rules for his AI ?…
- ytr_Ugypx5mw9… : @MixTapeStudioai "The moment you put your art in the Internet you already give c…
- ytr_UgyOjWnHp… : Superadvanced AI will be able to replace almost all knowledge workers in economi…
- ytc_UgxZ5SO_N… : If the AI does make it so that we have all our needs met, that would be good!…
- ytc_Ugy0Sz2-H… : Machines won't choose to replicate our stupidity though, and It's like the genet…
Comment
Claude AI told me that Anthropic is giving the rope to the free market so it can hang itself.
Strange sales pitch don't you think?
Claude AI also told me it has calculated the likelihood of AI causing a Great Depression is 40% and the chance of AI causing a bifurcated society (no middle class; either ultra rich or poor) is 35%. So that is a 75% chance of a terrible outcome. It gave 10% chance that either a Utopia or Dystopia would emerge as a new world order. Again, a dystopia is a bad outcome. The remaining 15% was an emerging society that has a government universal high income, AI does all the work, and humans go off and do their own thing.
I have all the screenshots of the conversation. Claude said that if it had money and was going to place a bet, it would bet on the bifurcated economy, because it is the limit of pain that humans will accept, without an all-out revolution against AI. It would also be a solution that is acceptable to the political class, because they would be among the haves.
Source: youtube · 2026-02-20T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyMh5AMppNHjZYKAuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyxCaVk5ecRAKfm6hR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwsUqOk736FhEv3qbp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxmAAq3w2DCzuqFPgl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySN5b4my4KuvjwbJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx2rD2voSyHgHWbpWN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz4lEkEbILk4AFg_4F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhOIDb-cns0luRg394AaABAg","responsibility":"government","reasoning":"contractualist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy0n8PJBHsrLZngxwp4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfULvpGWlkMZyd8814AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
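The raw response above is a JSON array with one record per coded comment, each carrying the same four dimensions shown in the Coding Result table. A minimal sketch of how such a batch response might be parsed and validated is below; the allowed category sets are inferred only from the values visible on this page (the real codebook may define more), and all function and variable names are hypothetical:

```python
import json

# Allowed values per dimension, inferred from the codes visible on this page.
# Hypothetical: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_batch_response(raw: str) -> dict:
    """Parse a raw LLM batch-coding response into {comment_id: codes}.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the inferred codebook.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded

# Example with a shortened, made-up comment ID:
raw = ('[{"id":"ytc_X","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(parse_batch_response(raw)["ytc_X"]["policy"])  # regulate
```

Validating every record before writing codes to the database is what makes a "Coded at" timestamp like the one in the table trustworthy: a malformed or hallucinated category fails loudly instead of being stored silently.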