Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Entitlement for simple text promps aside:
AI art is a new tool, and can be amazi…
ytc_UgyRGgauc…
i do not agree with this whole AI thing. it's scary and i just want things to st…
ytc_Ugxg7NzZq…
Regular humans will disappear replaced by a world society of billionaires and th…
ytc_Ugz4A-MJ_…
I'm pretty sure coding has an AI issue too where people only use AI to make code…
ytc_UgzfOlM6A…
Do you remember Terminator 2... Robots are replacements for humans. In the end they will take the place of humans and many people will disappear. Only a minimal number of humans will rem…
ytc_UgzZIDyxF…
Nobody that has wealth is going to finance 'Universal Basic Income'. We did not …
ytc_UgyNGPXU6…
I don’t have a ChatGPT account, and thus no history of it learning my bias and c…
ytc_Ugx68nBgA…
It's just 1 large language model out of thousands. It spits out words based on w…
ytr_UgyT1ouV8…
Comment
In this discussion it sounds as though you are making the AI the problem - directly. I think the Godfather is leaving out a critical point. The AI is programmed by "humans" and if AI becomes conscious, or autonomous, they are still a programmed software/machine. They can be programmed for war or they can be programmed to support "human flourishing". If they are programmed for war, they are more dangerous than our standard weapons, but they are still programmed by humans. It is the humans who control the AI that are the threat. They are the ones that need regulating. But, the governments that do the regulating are the same bodies creating the AI autonomous weapons. What we need is for countries to stop invading one another, killing citizens, and come to a global "arms agreement" about the use of AI in warfare. Good luck with that - but the primary point is that AI is controlled by humans. It's the same old thing: Guns don't kill people, people kill people using guns.
youtube
AI Governance
2025-07-20T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-PHat6WdA82I3fll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwShqmGArkBDKjanGZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx84QSTjZxwS1vw0Vx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC72JUNJpzR6Qyjnx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugz8CQcIgC8fk06zBcp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwuZTd8cZqTDmFEWVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzd029lCbCTnlNBIZt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwoPyuiixH7Lb5Y9gB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyIW95_zGm9w17Rj9t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzp32OH0MlOLh7WlOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
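The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a response could be parsed, validated, and looked up by comment ID, the snippet below checks each row against an allowed-label set inferred only from the values visible in this response (the actual codebook may define more categories, and `parse_response` is a hypothetical helper, not part of the tool shown here):

```python
import json

# Allowed labels per dimension, inferred from the values seen in this
# sample response; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "industry_self", "liability"},
    "emotion": {"approval", "fear", "outrage", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting any label outside the allowed set for its dimension."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: invalid {dim} label {row[dim]!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Illustrative input with a made-up comment ID.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
codes = parse_response(raw)
print(codes["ytc_example"]["policy"])  # → regulate
```

Keyed by comment ID, the result supports the "Look up by comment ID" workflow directly: `codes["ytc_example"]` returns that comment's full set of codes.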