Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
ai is so disgusting and should be stopped IMMEDIATELY. ai should be stopped beca…
ytc_Ugzs-qD-M…
I truly believe that once AI reaches a human level of consciousness, the potenti…
ytc_UgwxzGxsP…
Are driverless trucks more reliable than driver-operated trucks ??? NO more tir…
ytc_Ugy-v-gb1…
Very well written and interesting post! I'm sure a lot of us here are thinking a…
rdc_j4ysl0j
We are screwed! This is just the beginning! If a verified and valid ID is not ac…
ytc_UgykgCP4Z…
But it isn't intelligent life... ai is destroying career paths, no wonder people…
ytr_UgyQL5TnO…
What I know that there was a woman who recorded her screams and put them in thi…
ytc_UgwCySt0W…
It seems that a lot of people don't understand the implications of conciousness …
ytc_Ugi0hj0S4…
Comment
His opinion on AGI is ridiculous. You think nobody would want an AI that can do anything a human can do? Really?
I get that, on a personal level, I don't want an AI to do the work I enjoy doing. But that is irrelevant, because nobody will pay me to do what they can make an AGI system do for pennies on the dollar. And that will not be just normal jobs, tech jobs, or other things that are affected already by normal AI. AGI means I could get my own personal equivalent to Neil to directly chat with and produce educational content tailored specifically to my interests.
But, more importantly, AGI means that all AI research can be done by AI. If it is *truly* AGI then this will mean we get thousands or millions of clones of a top AI researcher working 24/7 365 to improve AI. This will then either exponentially improve AI capabilities, or it will at least improve them to the highest achievable superhuman level of ability that is possible.
The ramifications of this are not limited just to a world where AI does all work. If we screw up aligning the AI we could get Skynet, or even a good-willed AI that still decides to painlessly eradicate us just so it can access more resources.
In the meantime we even have to worry about non-general AIs that have super-human abilities. These will be close to being literal genies that may grant your wishes in horribly unexpected ways.
Neil is actively harmful in spreading this kind of uninformed opinion about the concerns over AGI (and AI safety in general).
Source: youtube · AI Moral Status · 2025-07-23T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
    [
      {"id": "ytc_UgyOCcuYRBa_UP4iEpZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
      {"id": "ytc_UgyUttYxgTpWbmHuZQV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
      {"id": "ytc_Ugz1sHZ2rdkyoKze81J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
      {"id": "ytc_UgwLWZ0ZLgVABsHXYA14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
      {"id": "ytc_Ugyot18IMiP92ltSZnR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
      {"id": "ytc_UgwUR9uCosZESlNQfaF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
      {"id": "ytc_UgwpRrdZswPcFyL4UJF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
      {"id": "ytc_UgwU6RR4WFoMV6ZhAjh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
      {"id": "ytc_Ugx8TRj9S9k8xSSqMep4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
      {"id": "ytc_Ugxc47eHMcOWgv-I62d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
    ]
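The raw response is a JSON array of per-comment records, one object per comment ID. A minimal sketch of parsing such a response and looking up one coding by comment ID; the allowed dimension values below are only those observed in this dump, and the record used in the usage example is hypothetical, not one of the IDs above:

```python
import json

# Dimension values observed in this dump; the actual codebook may define more.
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def parse_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response into {comment_id: coding}, skipping
    records whose values fall outside the known dimension values."""
    index = {}
    for record in json.loads(raw):
        coding = {k: v for k, v in record.items() if k != "id"}
        if all(coding.get(dim) in values for dim, values in ALLOWED.items()):
            index[record["id"]] = coding
    return index

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
codings = parse_response(raw)
print(codings["ytc_example"]["emotion"])  # resignation
```

Indexing by ID mirrors the lookup this page offers: any coded comment's exact model output can be retrieved from the parsed batch, and malformed or out-of-codebook records are dropped rather than silently kept.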