Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @icantevendothjsanymrke 1- you are fighting a fight already lost. there are open… (`ytr_Ugy4UpYqz…`)
- What the powers to be (folks that think they are in power) don't realize is that… (`ytc_Ugwh_km43…`)
- Any body remember the AI they created that hooked up to Twitter and became so fu… (`ytc_UgzBmHj2H…`)
- The problem with AI-Stans is that they primarily view art as something that is n… (`ytc_UgxE58YGC…`)
- the real risk is that 90% of the population just accepts AI is right 80% of the … (`ytr_UgwV0YMNf…`)
- I see what you did there using the A.I's answers in different topics to outplay … (`ytc_Ugy2xDfv8…`)
- They are saying AI would replace the entire work force in 10 years. Are we sure … (`ytr_Ugx5HWdW3…`)
- I'm quite sure I sat thru this interview 4 months ago. I'm less sure if I made … (`ytc_Ugzo4kLJL…`)
Comment
Just like Clark Kent, who embodies the potential for both good and evil, AI has the capability to be a powerful tool for positive change or harmful outcomes, depending on how humans choose to develop and implement it. The responsibility lies with society to guide AI's evolution, ensuring it's used ethically and for the benefit of all. This highlights the importance of establishing robust ethical guidelines and accountability in AI development, so we can harness its potential for good while minimizing risks. Ultimately, the conversation around AI should focus on responsible stewardship rather than portraying the technology itself as inherently good or bad.
youtube · AI Governance · 2025-06-18T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyoxxDeAlD3vB1-3u54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx3f--JUh3x247_4xp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz2J89HWMnCKyl0lRV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTrNeqek0Bkedvhnp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4Z147BvY8In4ZEE14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxg6bWqtc3kTg5qsqB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxO_ee5wRAekfh1B5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlnZlqMxb14lwEK-t4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz1kjdMplqDbH2Kc5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugze9Lib7ntCLuvWjZN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
```
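A response like the one above can be checked programmatically before the rows are stored. The sketch below validates each coded row against the four dimensions shown in the result table; the allowed label sets are inferred only from the values visible in this sample response, so the real codebook may contain additional labels (an assumption).

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook may define more labels (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_rows(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors.

    An empty list means every row parsed and every dimension value
    was found in the (assumed) codebook.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = []
    for i, row in enumerate(rows):
        if "id" not in row:
            errors.append(f"row {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                errors.append(f"row {i}: {dim}={value!r} not in codebook")
    return errors

# Usage: a well-formed row passes, an off-codebook value is flagged.
ok = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}]'
print(validate_rows(ok))
```

Running the validator on the model output before insertion makes off-codebook labels (a common failure mode of free-text LLM coding) surface as explicit errors rather than silently polluting the dataset.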