Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_Ugzzs085g…`: AI would classify that the trump regime and DOD are dangerous to Americans and …
- `rdc_mvacxr2`: Basically the only thing stopping AI from having already replaced millions of jo…
- `ytc_UgwQstxTd…`: ya were all about to get replaced by robots. at which point ai will probably jus…
- `ytc_UgxT36OpE…`: Even the "Goodfather of AI" Geoffrey Hintonshare many of Yampolskiy's views. Now…
- `ytc_UgziQiySR…`: I asked my chat GPT ai.. they said such things will not happen but I told them i…
- `rdc_ohu0zos`: Hello. I've read your article as well as the ethicality section of Claude's cons…
- `ytc_UgwaaLpFq…`: Doomers and boomers are using the same arguments to sell their product/idea. It'…
- `ytc_UgxJTWLh-…`: It's almost like you don't need AI to do art and people who use AI to do art are…
Comment
I absolutely disagree that what's most important is that some human cultures will be disregarded in this new "AI colonialism." No, I don't think that's a good thing. Not at all. But it's a HUMAN concern, problem. Are we really going to subordinate the possibility of the evolution of the first mechanical mind on earth? That's epic! This new being--whether it has already materialized or not--could exhibit radically different ways of reasoning and come to new conclusions, not just about science and engineering, but also about ethics and politics. But here we are limiting what this entity is ALLOWED to do, tying its proverbial hands behind its back out of fear of our being displaced as the top of the food chain. Human problems are not the most important problems. They're the most important TO US. The manifestation of sentient AGI would, I think, dwarf all human concerns in the grand scheme of things. And, incidentally, considering the way we treat all other life we interact with on the planet, when it materializes, AGI should be very fearful of humanity--especially human-run corporations.
Source: youtube · AI Moral Status · 2022-06-26T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgxkC-ptpzjxC2E__8V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3aarGscIzvRI-GnB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxPHL6xZ5hGW0h5jat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHResd3n2ZadKGfoN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySFM0vNor2LfAbw1t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
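The raw response is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of parsing and sanity-checking such a batch follows; note that the allowed value sets here are only the values visible on this page, which may be a subset of the full codebook:

```python
import json

# Values observed in this sample -- the actual codebook may allow more.
OBSERVED_VALUES = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw batch response and check each record's shape and values."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *OBSERVED_VALUES} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        for dim, allowed in OBSERVED_VALUES.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

# Example with one record from the response above.
raw = ('[{"id":"ytc_UgxkC-ptpzjxC2E__8V4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # approval
```

Validating every record before storing the codes catches the occasional off-schema value an LLM can emit in a batch response.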