Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID (a minimal lookup sketch follows the samples below)
Random samples — click to inspect
- "I'm excited for AI but also caution about them. I'm saving up so much money just…" (ytc_UgwsELDHG…)
- "Watch the show the 100.. its about the end of the world nuclear shit. And an AI …" (ytc_UgxtUi1Pt…)
- "I absolutely agree that they should ban facial recognition. It will be a scourge…" (rdc_eu6d2ln)
- "While Senator Benrnie Sanders and the rest worry about AI wiping out the working…" (ytc_Ugx9pLv_0…)
- "I have a question. Let's do a thought experiment here for a sec. So you have AI…" (ytc_UgzbDq55S…)
- "I stopped computer programming after my first year for this reason. It was extre…" (ytc_Ugy5YJetq…)
- "AI. Annihilation Imminent / 7th world reset started / History will record the peo…" (ytc_UgySe2Pg7…)
- "If these were humans driving then ego would of gotten in the way. Nothing bad ha…" (ytc_Ugxc7PRT0…)
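The lookup box above is only a UI affordance on this page, but a minimal sketch of what looking up a coded comment by ID might do follows. Everything in it is an assumption apart from the ID prefixes seen in the samples (ytc_ presumably YouTube, rdc_ presumably Reddit): the file name coded_comments.jsonl, the JSON-lines layout, and the function name lookup_comment are all hypothetical.

```python
import json
from pathlib import Path

# Hypothetical store: one JSON object per line, keyed by the same IDs
# shown in the samples above (ytc_* for YouTube, rdc_* for Reddit).
CODED_PATH = Path("coded_comments.jsonl")

def lookup_comment(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if absent."""
    with CODED_PATH.open() as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: the one Reddit ID in the samples appears untruncated.
print(lookup_comment("rdc_eu6d2ln"))
```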
Comment

> 56:08 Edit: Later on ya'll reference Elon saying he didnt want to create terminator but realized he choice was either be a player or be kn the sidelines.
> To adress the question of why would we continue to build ai knowing there is at least 20% odds it kills us all, I would like to point out the context of the world we live in. Ai is clearly an arms race, this arms race has every country that can play the game playing. Choosing not to build ai is akin to choosing not to develop the atomic bomb because of the possibility of destroying the world. One person taking the moral stance not to build ai because they fear the possibility of killing everyone does not stop anyone else from choosing to develop ai anyway. 20% risk that is in your hands seems like the better option.

youtube · AI Moral Status · 2026-01-27T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
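The four dimensions in this table, together with the value sets visible in the raw response below, suggest a record shape roughly like the following sketch. The class name, the validation step, and the completeness of each value set are assumptions; only the field names and the "unclear" fallback come from the page itself.

```python
from dataclasses import dataclass

# Value sets observed in the raw response below; the real codebook may be larger.
RESPONSIBILITY = {"developer", "ai_itself", "user", "government", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "unclear"}
EMOTION = {"fear", "mixed", "indifference", "unclear"}

@dataclass
class CodedComment:
    """One coded comment; every dimension falls back to 'unclear'."""
    id: str
    responsibility: str = "unclear"
    reasoning: str = "unclear"
    policy: str = "unclear"
    emotion: str = "unclear"

    def __post_init__(self) -> None:
        # Coerce anything outside the known value sets back to "unclear".
        for attr, allowed in (("responsibility", RESPONSIBILITY),
                              ("reasoning", REASONING),
                              ("policy", POLICY),
                              ("emotion", EMOTION)):
            if getattr(self, attr) not in allowed:
                setattr(self, attr, "unclear")
```

Normalizing bad values to "unclear" rather than raising would keep a single malformed field from discarding an otherwise usable batch, which is consistent with the table above but is, again, a guess about the pipeline's design.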
Raw LLM Response
[{"id":"ytc_UgxSFCO02UrFuSpjfAZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxF4kpWe0I7ZJJbpEx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzLZBUlP_WkIYoHIGZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzi8TOcX6LUuErsqEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyIv1-i443K6xz3Rg14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBJBT7wHYm_0MkzVF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1hEJ2nIIydyC3QnV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyXB-fI3s41Zj-aXeJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDITD15vIXo4o9ADR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxjBCORjkNJr1CjiCF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})