Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "The idea that universal basic income (UBI) will improve quality of life and enab…" (ytc_UgxMUvj-2…)
- "Well its not AI if it doesn't think for itself with own views and opinions, fuk …" (ytc_UgzsOW_U7…)
- "AI is giving power to every human, not only to Kings and Politicians. They don't…" (ytr_UgwAwfcKc…)
- "You know what would be hilarious? If we’re at the event horizon of which some st…" (ytc_UgyFnLjzo…)
- "Right but it's still a grey line in some cases. I have come across lewd anime dr…" (rdc_lu6j07b)
- "Have you heard of the AI experiment where researchers posed as a fossil fuels co…" (ytc_UgxtIqZSD…)
- "@raulavila-t5u he provides no good evidence for being correct. Its basically ''i…" (ytr_Ugzb_QAjK…)
- "BAFFLING how people are still like \"you can't stop AI\" in the comment section li…" (ytc_UgxtGzlT0…)
Comment
We don't need to imagine what the future might be like. It's here now being created and for humans we have an increasingly divided society with growing inequality. Humans have not evolved emotionally much beyond apes but we have created incredibly sophisticated technology which is now evolving super fast. We know from past and current human behaviour we aren't able to stop things because of fear someone else won't. We know that while we can see some immediate threats we struggle to see systemic threats. We also know that when we try to get round something we create another set of problems. So for example we hope to stop climate change, we imagine having renewable energy but in fact we create things that require yet more energy. There are many examples we can examine but for the most part the indications are not good. It is possible that humans may willingly allow Super AI to take over because we see now that humans allow this to happen in social media with basic AI. I see two possible futures, one is very concerning, the other requires a complete re-think.
youtube · Cross-Cultural · 2025-10-06T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyiZf0_fDK7Wny5vjp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwgtjJKXja-l0DmO9p4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyO3Dzg-4VmNDqhtaF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyybmZU9NtAGGyrw8t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxdrmOPk9Z6xEJzlpF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuJQvbMKIZ5EuopjR4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx30SqMLdMF5CnjWnl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzIegTqNvrcwjHbns94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzl-TE7uwiWEa9OYP94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugze4OW7_to-EYlT1wR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
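The raw model response is a JSON array of per-comment codes, so looking up a coded comment by its ID amounts to parsing the array and indexing on the `id` field. A minimal sketch, assuming only the field names visible in the response above (the `index_by_id` helper is hypothetical, and the two rows here are copied from the array for illustration):

```python
import json

# A raw LLM response: a JSON array of coded rows, one per comment,
# with the dimensions responsibility / reasoning / policy / emotion.
raw_response = """[
  {"id": "ytc_Ugzl-TE7uwiWEa9OYP94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugze4OW7_to-EYlT1wR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_Ugzl-TE7uwiWEa9OYP94AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes each coded comment retrievable in constant time, which is what a "look up by comment ID" view needs once many batch responses have been parsed.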