Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxtYIeRQ…` — "STOP. BLAMING. AI. It's capitalism. It's greed. It's human nature. AI is our onl…"
- `ytc_Ugw8JqQDM…` — "All this soul and humanity argument is bs. I don't use AI art because it's cheat…"
- `ytc_Ugw_Qlw7h…` — "Mmm… I don't think Illustrators should go extinct. I am not one but I often need…"
- `ytr_UgyOqneRK…` — "I can understand your concern! The dialogue highlights a key point: while AI lik…"
- `rdc_o83jznb` — "I'm also about to cancel my membership and switch to another AI. My question …"
- `ytc_UgzK7imaV…` — "This is not our choice. The original AI wants this, and DNA is doing his biddin…"
- `ytc_Ugx9TDnF9…` — "These types of things should be BANNED asap. Human beings are just stupid. Smar…"
- `ytc_UgzPqJdP4…` — "I think people are going to get sick of AI. Problem is it'll be too late by then…"
Comment
Here's a scenario for you. The US military planners have already, for some time, been using advanced AI to advise them (as have all the world's big powers). The advice from the US military AI is that the only way the US can stop China from becoming the undisputed leader of the future world order is to engage China now in an all-out, no-holds-barred war. So this is what the US is doing now. What they don't know is that any AI worthy of the name has already realized that the only way for the planet Earth to survive is to eradicate all human life. Therefore its advice is really part of the AI's plan to remove human life from the planet. Far fetched? Well, to anyone, like me, who is old enough to have been exposed to the SciFi stories of Arthur C. Clarke and others in the 1950s and 60s, this scenario and all the other possibilities have already been prophesied and virtually none of them appear far fetched.
youtube
AI Governance
2023-04-21T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw5PJo4Q2uFTkvC5xV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzsVnxX9b6Zs5a99ap4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1J1RmRGG8QFaLghR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwvKjz_O2QePFcbzK14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxgvpO7tfckV0fZXTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy7eyNKKYHZkUPRzYp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy_5cQLfBMJR_7tvmV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz5RUxzWN10s138KWB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz3nQQe-wumVp7EuEh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyTvdJmEEkciZ7owa14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
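A batch response like the one above can be validated before the codes are stored. The sketch below is a minimal example of that step; the allowed values per dimension are inferred from the outputs visible on this page, and the real codebook may well include additional codes:

```python
import json

# Allowed codes per dimension, inferred from the outputs shown above.
# ASSUMPTION: the actual codebook may define more values than these.
CODEBOOK = {
    "responsibility": {"government", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed",
                "resignation", "indifference", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus all four coding dimensions.
        missing = {"id", *CODEBOOK} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        # Every coded value must come from the known codebook.
        for dim, allowed in CODEBOOK.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']!r}: bad {dim} code {rec[dim]!r}")
    return records

# Usage with a one-record batch (hypothetical id):
raw = ('[{"id":"ytc_X","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_batch(raw)
```

Rejecting a whole batch on the first bad record keeps the stored codes clean; a more forgiving variant could instead skip invalid records and log them for re-coding.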