Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxHxxZ8C…: People don't seem to understand the power of buzzwords especially like "Artifici…
- ytc_UgzaBqC0u…: It is currently Narrow AI, meaning it acts in accordance to how you act. It refl…
- ytr_Ugw88ENnS…: I believe there is someone(anti christ) who deceives people to create super ai (…
- ytc_UgwH6O553…: AI is held up by investors and is constantly losing money without any actual gro…
- ytc_Ugz6W93KQ…: So far I’m better then Ai at sorting at a recycling facility 😉🪬🆚✋that’s 1 chat g…
- ytc_Ugwv6mTXo…: The real artists are so delusional, that's free publicity for the AI art lmao. I…
- ytc_UgzjCERHh…: I just asked open AI what are considered sentient beings and it said, " humans, …
- ytc_UgwkHaiXY…: Im really tired of of this Ai boom
  The big giant companies of Ai just wanted to …
Comment
Thanks for your words! Most scientists and politicians keep talking about climate change as the biggest threat to humanity — something that may seriously affect us in 100 to 300 years. But in reality, AI will bring us to the brink of chaos within the next 10 years if we don’t create new systems and social models in time. While climate change is a long-term global issue, the disruptive potential of AI is immediate: automation, deepfakes, mass misinformation, collapsing job markets, and the concentration of power in a few tech corporations could destabilize societies far sooner than environmental effects will. Without rapid political adaptation — such as universal basic income models, new education systems, and ethical AI frameworks — this transition could lead to massive inequality, unemployment, and social unrest.
Source: youtube · AI Jobs · 2025-10-10T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyqBkwXzSO-1yXP2Gl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3dmo00qvTajTy0b14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8MC-o2cR26GAGpqF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzUYgHfOCmZ7MbUFmZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxLl4OyTQXpEoufp9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwIlN8evxbMNjxwnY94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz9be20PVpih3h1BEh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwSWDXmsyJMLI5VoFt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwt_MQPYGY8RZJXDzV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzgKaAjh9DtyCcNIXF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
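A raw batch like the one above can be checked before it is stored: parse the JSON, then drop any row whose values fall outside the coding scheme. The sketch below assumes Python and a hypothetical `SCHEMA` of allowed values inferred only from the labels visible on this page; the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the samples shown above.
# This is an assumption: the full codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with an id and on-schema values."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip malformed rows rather than failing the whole batch
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rejected rows could instead be queued for re-coding; silently dropping them is just the simplest policy to sketch here.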