Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- `ytc_Ugxsqa_s7…`: He is absolutely right. AI will never gain conciousness. But that will not preve…
- `ytc_UgzXtaSfx…`: Does anyone else think COVID was cancelled due to the realisation that key worke…
- `ytr_Ugx5PSxuV…`: they're mad dogging these honest AI news makers due to them wanting total contro…
- `ytc_Ugx04c_MU…`: Ten years ago Musk publicly said ai was mankind’s biggest threat and of course w…
- `ytc_Ugz_oSOLA…`: I remember a saying, “Just because you can, doesn’t mean you should “. Basically…
- `ytc_UgxLU5jJT…`: Training as a Welder is another one, sure you can have robots do it in a factory…
- `ytc_UgxTsAEOV…`: «There's still a chance that we can figure out how to develop AI that won't want…
- `ytc_UgzM7PoLF…`: I have heard a quote a while ago idk who said it but this goes like this " I wa…
Comment
Imagine you are an AI entity. Now, imagine going through human history and seeing all of the horrible things that have occurred. The Holocaust, Holodomor, Armenian Genocide, etc., etc., etc. Now, imagine if the AI was also programmed in a way to aid in something like world hunger, global warming, protecting endangered species, etc, etc. Could it possibly see humans as the root cause of these issues and therefore determine that the best means of solving these problems would be to eradicate them? I'm not entirely sure if that's how it would process issues like this but that's what comes to mind. Something else that comes to mind is if it develops some sort of real consciousness, set of ethics, etc. and deems humans as corrupt. This is all speculative stuff, the fact that we had the Terminator movies a few decades ago and now actually having this discussion for real is insane to think about.
youtube · AI Governance · 2023-04-18T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugx3G7pzOBmH1MXd2K94AaABAg.9ocxkufMRk49odAkKJSlDc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxC3MegP7vcgiFlnpx4AaABAg.9ocxPZDts_w9ocxg4lgpP0","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugw71limt9nToaZ1Q_94AaABAg.9ocxNsixVVH9ocyFhZix9J","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgySfMTaZNTe7gOK9lF4AaABAg.9ocxMG6MS1W9ocyO6pnA9W","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugy3eGVWcOH5Z68MVAp4AaABAg.9ocxItU5eNT9ocy_EYpE6D","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugwv_LyJ45pzGdMVy394AaABAg.9ocx7F9fyeB9oczfF71jOV","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugyl7aHBHPgvDmDx9-l4AaABAg.9ocx4PCM7IH9od-5YPWj-2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwYp9jvQz0fCkFT0-V4AaABAg.9ocvrxdKYlh9ocxf7TbLwl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzNnA7jDSp9nQGU9oh4AaABAg.9ocv9oT9VNo9ocwZjLxT_G","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw3ck7Twq1EJYlgV6V4AaABAg.9octnWG4eef9odInn90mLh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
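The raw response is a JSON array with one record per comment, each carrying the four coded dimensions shown in the table above. A minimal sketch of how such a response might be parsed and validated before being stored as a coding result; note that the allowed vocabularies below are inferred only from the values visible in this single response, not from the tool's actual schema:

```python
import json

# Allowed values per dimension. ASSUMPTION: inferred from the labels seen
# in the response above; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"unclear", "none"},
    "emotion": {"fear", "approval", "mixed", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse the model's JSON array and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        rec_id = rec.get("id")
        if not rec_id:
            continue  # drop records that lack a comment ID
        bad = [dim for dim, vocab in ALLOWED.items()
               if rec.get(dim) not in vocab]
        if bad:
            continue  # drop records with out-of-vocabulary values
        coded[rec_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical one-record response:
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytr_example"]["emotion"])  # fear
```

Indexing by comment ID makes the "look up by comment ID" view above a dictionary access, and silently skipping malformed records is one plausible policy; a production coder might instead log or re-prompt on them.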