Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Most likely the elites developing this and the elites financing this together with the other elites that will and have access to this, will try to use this for global domination. One elite either will either succeed or just there will be a new world order of competing elites. The most intelligent approach is integration of AI with the human brain, because it solves alignment and it also solves the danger of our intelligence being vastly outmatched, since we would be the AI and the Human at the same time.
youtube
AI Governance
2023-07-07T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzmzolVADOl15o-OhF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxJ11phjf7nWYwjDVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyZZ0uyt1yfo5EgESV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz6wifCku4Z8pgeu_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyx3znI4ac2gaOe2Zh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxvrOAKCimoTfn-JBJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgznEDQltl08h_r6oP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzPYmvO49luflp5DU94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxiEaPtU4IieNv9ndJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNrYe20DwF2Kbm6qp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
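The raw response is a JSON array with one coding object per comment ID, so recovering a single comment's coding (like the Coding Result table above) is a parse-and-index step. A minimal sketch, assuming only the field names visible in the response (variable and function names here are illustrative, not part of the actual pipeline):

```python
import json

# Raw batch response from the model: one coding object per comment ID.
# (Abbreviated to one entry from the response above for illustration.)
raw_response = """[
  {"id": "ytc_UgyZZ0uyt1yfo5EgESV4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]"""

# Index the codings by comment ID so one comment's result can be looked up.
codings = {row["id"]: row for row in json.loads(raw_response)}

# The entry for this ID matches the Coding Result table shown above.
coding = codings["ytc_UgyZZ0uyt1yfo5EgESV4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

The dict-by-ID index is what lets the inspector jump from a coded comment to the exact object the model emitted for it.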