Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I was in similar situation- boring non-complex backend development, ~20hrs / wee…
rdc_moeucpc
5:00 I tend to disagree, if you have a job that currently consumes 60hrs weekly …
ytc_Ugz2GMC2c…
still the realm of science fiction. the hardware, power, and cooling requirement…
ytc_Ugy3iWUKu…
It's kinda wild to see a robot typing on a computer. Like, I get that it's a met…
ytc_UgwI0pLyI…
I think the solution is to start off with an mandated algorithm that that minimi…
ytc_UgweUjyiP…
I think this is a step too far for me. I get that AI art bad and everything, but…
ytc_UgxJivEqV…
After I spend around a day trying to get AI to generate me a semi-passable wallp…
ytc_UgxA0HY_a…
Dont waste your time listening or believing any of this bollocks. AI hasnt got …
ytc_Ugwqo0Im_…
Comment
The better news is that those tipping point calculations are built upon "all other things staying the same" between now and the mass impacts playing out, yet we are already devising new understandings of and new means of carbon capture.
I'm not ignoring nor trying to downplay the issue, but the reality is that our near-term world is being redefined by AI where pretty much NOTHING will stay the same for very long.
Under even moderate-to-extreme climate models, where the human population has already begun experiencing significant impacts today, severe large-scale starvation and mortality aren't projected to escalate sharply until after 2050.
That's 24 years from now, literally an ETERNITY in AI progress for the world.
I've spend my life following computer progress, was in charge of all PC support at Kwajalein Missile Range for 5 years, and spend 10 hours weekly keeping up with all AI developments today.
Even the next 10 years will see change the most people can't even begin to fathom.
Today, in my opinion, climate change may barely make the top 10 existential dangers if it's that far out.
youtube
2026-02-17T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugz0qMRIS1EDC_U9thR4AaABAg.9q91gcs1jfc9qDvYu3iBkI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxdkPN7eKd-xfGS-v94AaABAg.9q90C47Nr3m9q9EN_Ypemt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxdkPN7eKd-xfGS-v94AaABAg.9q90C47Nr3m9q9IgIAaco7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxdkPN7eKd-xfGS-v94AaABAg.9q90C47Nr3m9q9PvQBBcBx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugywj0NezAwPEYbPHZl4AaABAg.ATIUP7DQ_BIATJq8Em17c3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugy1rdkptAkgQ4RJbEB4AaABAg.AT0YZf900nyATucYzoGOac","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgyBQH0AWVAhunqIxRN4AaABAg.AQdLYbVpjcFASV9MMyT7XP","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyECf_nx8gTMVu6iJN4AaABAg.AMy4TenM1bQANxSfFNdym_","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgwVsOn1fQ1iyNtF80Z4AaABAg.AIpbMYFXanlAIxschIvxtl","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwVsOn1fQ1iyNtF80Z4AaABAg.AIpbMYFXanlAKuNyPourEM","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
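The raw LLM response above is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, assuming only the schema visible in the dump (the variable names and the two sample entries reused here are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# Two entries copied from the dump above; schema inferred from it.
raw_response = """[
 {"id":"ytr_Ugywj0NezAwPEYbPHZl4AaABAg.ATIUP7DQ_BIATJq8Em17c3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytr_UgwVsOn1fQ1iyNtF80Z4AaABAg.AIpbMYFXanlAIxschIvxtl","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the codings by comment ID for direct lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Fetch the coding for one comment, as the inspector page does.
code = codings["ytr_Ugywj0NezAwPEYbPHZl4AaABAg.ATIUP7DQ_BIATJq8Em17c3"]
print(code["emotion"])  # approval
```

The first entry matches the Coding Result table shown above (responsibility none, consequentialist reasoning, no policy, approval), which is how a table row can be cross-checked against the exact model output.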