Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews):

- `ytc_Ugxe6s6ub…`: "I’m using voice to chat so I hope this comes across in the correct way. I am typ…"
- `ytc_UgwFGJvrf…`: "Makes sense, automate the warehouse, truck, burger flipper and cashier. Guess a …"
- `ytc_UgzNlya-f…`: "We shouldn't make robots that can actually feel and think for themseleves. The w…"
- `ytc_Ugy99wLr6…`: "When they say with AI there will be no more hard work for men, what that really…"
- `ytc_UgyanV9eJ…`: "Karma is here for the stupidity of the big technology. Good. The greedy are gett…"
- `ytc_UgxOvvkf5…`: "In 10 years, over half of the economy will be run by superintelligence and OpenA…"
- `ytc_UgzVG6mX2…`: "I'm so sick of LLMs. What happened to checking your sources? There have been mul…"
- `ytc_UgwrYPC1a…`: "May want to tweak those algorithms. Five days to fill an order plus another five…"
Comment
I have a question — I’m just a teenager, so forgive me if it sounds dumb.
So, if AI becomes powerful enough to threaten human intelligence, then humans might not be in a position to stop it.
There are these things called "AI agents" — they’re different from regular AI, and I think a personalised AI agent that understands these threats could detect them before they happen.
Such an agent could potentially hold complete control over the particular threat, because AI might be better at managing and understanding other AI systems than humans.
Since humans can't always comprehend what happens when "data" interacts with artificial intelligence, maybe AI agents can do that better than we can.
Anyway, that’s enough rambling from me.
youtube · AI Governance · 2025-06-18T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyoxxDeAlD3vB1-3u54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx3f--JUh3x247_4xp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz2J89HWMnCKyl0lRV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTrNeqek0Bkedvhnp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw4Z147BvY8In4ZEE14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxg6bWqtc3kTg5qsqB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxO_ee5wRAekfh1B5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlnZlqMxb14lwEK-t4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz1kjdMplqDbH2Kc5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugze9Lib7ntCLuvWjZN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
```
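The raw response above is a JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step might parse that output and index it by `id`; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, but the helper function itself is a hypothetical illustration, not part of the tool.

```python
import json

# A small excerpt of the model output above, kept verbatim for illustration.
raw_response = """
[
  {"id": "ytc_Ugxg6bWqtc3kTg5qsqB4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz1kjdMplqDbH2Kc5h4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the raw LLM output and index the coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
record = coded["ytc_Ugxg6bWqtc3kTg5qsqB4AaABAg"]
print(record["policy"])   # -> regulate
print(record["emotion"])  # -> fear
```

In practice a validation pass would also check that every dimension value falls in the coding scheme's vocabulary before the record is accepted, so malformed model output surfaces at ingest rather than during analysis.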