Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.

Random samples
- ytc_UgxDDR34j…: The way forward / If we want to avoid the collapse scenario — the “real chaos” — …
- ytr_UgyJp_dNh…: We appreciate your observation. In this context, Sophia, the AI robot, is design…
- ytc_UgxKcBnoj…: THIS IS A VERY VERY BAD IDEA!!! / I’ll Thumbs down every video out there, having…
- ytc_UgwhJHE0X…: My problem with the "philosophers will figure out whether this qualifies as 'rea…
- ytc_UgxQ_JH_M…: I once heard someone join in a similar discussion with the phrase 'free art for …
- ytc_UgiLNSy2w…: I'm so prepped for AI rights. I watched Cloud Atlas and I was like machines are …
- rdc_dsbcjhu: Fuck yeah. This fuels my faith in/dreams for a reparative future that I can be a…
- ytc_UgySk3DWa…: Excellent! I don't believe upon my eyes. I never thought this Era is come in my …
Comment
A short while ago I talked to ChatGPT about two opposing views: The Kurt Vonnegut/Margaret Atwood view placing humanity firmly on the »we are all f…« side of things and the Mario Bunge/Carl Sagan view. The latter pair would likely have believed that, while it is not clear cut, we have everything at our disposal (e.g. science) to »muddle through« and simply survive.
I then asked the GPT to place his bets and it did so in a surprisingly thoughtful way. It identified five major risk clusters: Climate overshoot, nuclear war, unaligned AI, pandemics & bio-engineering, and entrenched poverty/inequality.
Weighing in the odds, ChatGPT gave these scenario estimates for 2100: Decent future (45%), turbulent muddle-through (40), major catastrophe (15%). So, we might still make it. 🙃
youtube · AI Governance · 2025-06-25T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |

Coded at: 2026-04-27T06:24:59.937377
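Each coding dimension is categorical, so downstream sanity checks reduce to set membership. Below is a minimal validation sketch in Python; the allowed label sets are only those observed in the sample batch that follows, and the full codebook may well define more (an assumption, not the tool's actual schema):

```python
# Allowed labels per dimension, as observed in the sample batch below.
# Assumption: the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in {sorted(allowed)}")
    return problems

print(validate({"responsibility": "developer", "reasoning": "mixed",
                "policy": "liability", "emotion": "fear"}))  # -> []
```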
Raw LLM Response
```json
[
{"id":"ytc_UgzjHhZP_sVQ-HsYUUl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxieAyXjJpKA1Lvk-B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSLHwoPLGzsBwBM254AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz7YRLsoRkibqPlxKN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBpFJmWdO9-A2-poJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyDhR7t9dqJu1Mtuc94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgztSgqqDkDX4QMI4td4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgDGyAc-KhT0sONct4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz6I7ZG3kfd772bK7l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgysNaICClKUlbKPkxR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
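Since the raw response is a JSON array keyed by comment ID, the "look up by comment ID" view reduces to parsing the array and indexing it by `id`. A minimal sketch, with two records from the batch above inlined so it runs standalone:

```python
import json

# Two records from the batch above, inlined to keep the example self-contained.
raw_response = """[
  {"id":"ytc_UgyBpFJmWdO9-A2-poJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzjHhZP_sVQ-HsYUUl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

records = json.loads(raw_response)

# Index by comment ID so a lookup becomes a single dict access.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["ytc_UgyBpFJmWdO9-A2-poJ4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # -> developer fear
```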