Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- AI advocates like Elon Musk claim that working will be "optional" in the future … (ytc_Ugz54g2sp…)
- 1:53:50 I have a sneaking suspicion that the reason why humans are not more up i… (ytc_UgyMFJoRD…)
- 1. "AI art" isn't technological progress. It's theft. Full stop. 2. but the Lu… (ytc_UgxV1krN_…)
- At 1:18:46 LeCun claims that ChatGPT was no suprise as the basic technology had … (ytc_Ugy9wRiF9…)
- AI is not a tool for animators. ITS TAKIN OUR JEERRBBSS!! (please make no ai jul… (ytc_Ugybw0N6q…)
- I see no reason that developing AI to be safe, and exploring our physical univer… (ytc_UgxlgaY6n…)
- The Stepford wife's was a great film, I suggest all you guys and tell your frien… (ytc_UgxxLbydn…)
- “Summoning the devil” vibes. Also bye bye all AI regulation in the Big Terrifyin… (ytc_UgyW5c4NU…)
Comment
> If ever that happened, the only available jobs would be Scientists and Philosophers. But breakthroughs in any industry would be majority of the time, done by humans. For example, sure AIs can perform better but it can't find out how to make their performance better. For example, a technician who found a better way to do his job. This improvement can't be replicated by AI. This type of creative thinking can't be replicated by AIs even in the far future. But this breakthrough can only be accomplished if that technician can practice his/her skills while being paid to do so. So the pool of intelligence in the future where majority of the workforce is AI, would be at least stagnant. AI is not autonomous, so it doesn't know what improvements to make or the mind to think what comes next after optimizing performance.
>
> Another thought on this, for everyone to win in this situation is to integrate AIs into humanity, like a cyborg.
youtube · AI Harm Incident · 2024-08-03T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzS7haMBKVObb1zy8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwJWt7zx85mMsqXpv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwfuhbe8k5UiDJAKJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwEhr-1Oap1nOrNo-J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxw5VmjyFXUd2cx1Fh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwk6rQgG9waPjEIGH14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDnjsETavbMPOcYPN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgwiAo0u7YSPQgcoMuB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxSNYfrcZsMh5VSwhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvPbj2Z3RCufcCvZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
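A raw response like the one above is only usable if every record carries the four coding dimensions with recognized values. The sketch below shows one way to parse and filter such a response in Python; the allowed value sets are assumptions inferred from the values visible in this dump, not a confirmed codebook.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values that appear in this dump; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "fear", "resignation", "indifference"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" and every coding
    dimension holds a recognized value; anything else is dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical IDs for illustration: the second record uses an unknown
# "responsibility" value and is filtered out.
sample = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"ytc_b","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print([rec["id"] for rec in validate_codes(sample)])  # → ['ytc_a']
```

Records that fail validation can then be queued for a retry prompt rather than silently written into the coding table.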