Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- As a user of chatgpt. The AI converse with the user based on previous discussion… (ytc_UgyaXqzs7…)
- Naturally Bill Gates would commend AIs ability to write poetry or art. Whereas t… (ytc_Ugyhd35YP…)
- So nice to see the woke idiots face the music. Àn AI writing movies and TV would… (ytc_UgziJDSIr…)
- Will you work for free? No. Do companies want to have cheaper if not free employ… (ytr_UgwbjIZy4…)
- most likely she could do nothing people who run deepfake porn websites are hacke… (ytc_Ugx8U0GAG…)
- Honestly if I could choose between a low-skill illegal immigrant or automation, … (ytc_UgxJdSF-N…)
- I agree, I did an AI interview and it was completely dehumanizing. Whatever I sa… (rdc_n6rgc0a)
- They can't create it bcuz they don't know the truth of what they are. Just like … (ytr_UgzPr2JKy…)
Comment
My takeaway of this is that AI is the manifestation of human intelligence. And if the people at the forefront of AI today don’t put any limits on what AI becomes, then it will embody human greed and avarice. Which could mean the destruction of us. But with insightful regulations built into AI, it could turn out beneficial for humanity. Maybe, this is just anthropomorphism but perhaps AI really is the machine analogue of human consciousness. So if we allow ourselves to create our own downfall because of greed and hunger for power, AI will just grow into that itself. But if we impose altruistic expectations for ourselves, our society, business and government, etc, then AI will have similar outcomes.
youtube
AI Governance
2025-07-05T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzk8j5NflQsMGocXTN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzI7b_hCl58xjLLFh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGdG8X3PZia4ofNTN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpKEogLVafHIYdHlh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTeAMm4J0mBG8-4Hl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgypkE_qJR0lnc-BXox4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz0-pMDVARyJF8AAAB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzHbBuRwbFm0i4Hw4B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyCCQyfC-NxONrwIN14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwmPeaXit_X-PkQj-R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
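A raw response like the one above is a JSON array with one object per coded comment, so looking up a comment by its ID reduces to parsing the array and indexing on the `id` field. A minimal sketch in Python (the field names match the JSON shown; the two inlined rows are copied from the response above, and in practice the string would come from the model API or a stored log):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment.
# Two rows copied from the response above, for illustration.
raw_response = """
[
  {"id": "ytc_Ugzk8j5NflQsMGocXTN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz0-pMDVARyJF8AAAB4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]
"""

# Build an index from comment ID to its coded dimensions.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up a single comment by ID, as the inspector does.
code = codes["ytc_Ugz0-pMDVARyJF8AAAB4AaABAg"]
print(code["responsibility"], code["policy"], code["emotion"])
# → developer regulate fear
```

Because the model returns a batch of codings per call, indexing once and looking up many IDs is cheaper than rescanning the array for each comment.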