Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Can't wait for when this video is centuries old and a bunch of true AI robots wa… (ytc_Ugz6QAJTt…)
- AI could use a justice system to take you out all it would need to do is put a w… (ytc_UgySXCvbD…)
- Well, that's rich coming from an AI CEO voicing his concerns about products that… (ytc_UgxBJRxjx…)
- As someone who works in the field of Machine Learning (AI) i believe AI is not g… (ytc_UgxwM2ong…)
- I am sorry you have already opened Pandora Box as the Future is fixed and those … (ytc_Ugy3rV9Ey…)
- It seems that a lot of AI users here are using D&D as an example. If you need po… (ytc_UgzY5mNV1…)
- Boys! Is control and domination really the ultimate thing? Being the creator of … (ytc_UgzfDHQyL…)
- AI is supposed to help us and co-exist with us. Not to replace us. I hope German… (ytc_Ugx5J0wfI…)
Comment
Humanity lacks cohesiveness, and this will be our downfall. No matter how scary, how clearly destructive and even apocalyptic AI reveals itself to be, governments will never be able to resist it, as the one who controls the nuke will end all others. AI will obviously evolve to assert itself as the new world order, and it’ll happen fast (ATTENTION I’m not referring to AI as an independent organism who thinks for itself and all that jazz, but rather a weapon of mass destruction like the nuke once was. The problem here is the implications that AI will have in our livelihood that will be unprecedented by any other technological advancement). If a government like the US chooses to terminate it, then China or whoever else will embrace it and take over the world. It’ll never happen. The government CAN’T oppose it. We are not one as a species, and thus we’ll allow ourselves to be poisoned and destroyed by our creation out of fear for one another. It’s been predicted as our mortal flaw since the beginning of time.
youtube · AI Governance · 2023-04-20T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwRUIARyGksSfH-awl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzNxxBUQZudeqU6ouF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyEaI4bPC5ElF_aXYB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-u0Qr2AZBLAwIVJJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwU3WzFZY1FquYg9ZJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyaWU9xZbaOywwD8d54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmAUNRg0arqjq_AqJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzfwHY0d4DPfcEY0pl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBPxzTO_lVFozfa4t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyrFnCusyI6_WMdFUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
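The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw LLM response as a JSON array of per-comment coding records and index the records by their `id` field. This is a minimal sketch, not the tool's actual implementation; `RAW_RESPONSE` below is a two-record excerpt of the response shown above, and the function name `index_by_id` is hypothetical.

```python
import json

# Two records excerpted from the raw LLM response above; in practice this
# string would be the full model output.
RAW_RESPONSE = """
[
  {"id": "ytc_UgwRUIARyGksSfH-awl4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzNxxBUQZudeqU6ouF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(RAW_RESPONSE)
# Look up one comment's coded dimensions by its ID.
print(codings["ytc_UgwRUIARyGksSfH-awl4AaABAg"]["emotion"])  # fear
```

Keying the records by `id` makes the per-comment lookup O(1), which matters once a coding run spans many batches of comments.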