Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- “And stop the creators of AI, or we living breathing humans regulate the shit out…” (ytc_UgwYRxmBc…)
- “As we all know / If you don’t use it / You WILL LOSE IT / SO USE AI AS A SECOND OPI…” (ytc_Ugyn2hRpN…)
- “AI is not a weapon to destroy artists, it's a "tool". Use it, master it! Keep it…” (ytc_UgxOnGsYO…)
- “All imma say is: notice how all the people defending AI never make art of their …” (ytc_Ugy9ClE04…)
- “Dear AI bros, tell be you've never met a disabled person without actually tellin…” (ytc_UgztGkMtf…)
- “one time i experimented by talking to a bot i made on character ai (of my spider…” (ytc_Ugztb5F7S…)
- “I talked to chat gpt. He said it is fake. Chat gpt and ai has many restrictions…” (ytc_UgxMNFFR3…)
- “honestly, that ai have enough embedded meaning or that an agent doesn't have a c…” (ytc_UgxuCp251…)
Comment
There are still ways to keep the focus of AI on making the world better for humans. All humans. But does that, and will that always, be in conflict with the business model of AI (paying the power bills, giving the investors a good return, etc.)? Maybe add a 4th directive: AI's result or output _must_ always be beneficial to humanity, not just "non-harmful", but _beneficial_ - for example, maybe patents based on AI research should have a shorter lifespan, before becoming public domain. Big Business won't like that but maybe we can find compromises as such. Today, humans still get the final say, so we need to hold those humans, _those_ decision makers, _those_ policy makers, accountable, so that _they_ help maintain boundaries of safety and public benefit and benevolence, by anticipation of all the things that _could_ go wrong. It _can_ be done, as our nation's founders did, and as everyone has done for the last (almost) 250 years - by following the path established by The U.S. Constitution. The road has had many rough patches but we're all still driving down it, and ultimately AI can _never_ sit in _that_ driver's seat.
Source: youtube · Video: AI Governance · Posted: 2026-03-22T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
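Once the raw responses are parsed, a coded record like the one above can be retrieved by its comment ID. A minimal sketch, assuming the records are held in memory as a list of dicts (the index structure is an illustration, not the tool's actual implementation):

```python
# Hypothetical in-memory lookup: index parsed coding rows by comment ID.
# The row below is copied from the raw LLM response shown on this page.
rows = [
    {"id": "ytc_UgwKRl54xheus_XHSw14AaABAg", "responsibility": "company",
     "reasoning": "contractualist", "policy": "liability", "emotion": "approval"},
]

index = {row["id"]: row for row in rows}

record = index["ytc_UgwKRl54xheus_XHSw14AaABAg"]
print(record["policy"])  # liability
```

A dict keyed by ID keeps lookup O(1) even when thousands of comments have been coded.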
Raw LLM Response
```json
[
  {"id":"ytc_UgwkED1FLGvlc2IMmVt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxMfxAK-Fmp-n2P-N4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwKRl54xheus_XHSw14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugzpc_6HmyVaXvQnJgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxZHkHpvftIabygleB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzlCvoNl4OkokzYqDt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwM1A698rswL4Wp02d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyi3axQQ-0vFnR0bVR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxYRlbyB7iJIRA59Yt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1HUU8J11XyI2jiFN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
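A response like the one above can be parsed and sanity-checked before the codes are stored. A minimal sketch, assuming the dimension vocabularies visible in the sample output (the allowed value sets are inferred from this page and may not match the full codebook):

```python
import json

# Allowed values per dimension, inferred from the sample responses above;
# the real codebook may define more categories (assumption).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id", "").startswith("ytc_"):
            continue  # every code must point back to a YouTube comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgwkED1FLGvlc2IMmVt4AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(parse_coding_response(raw)))  # 1 valid row
```

Dropping malformed rows at ingest (rather than at analysis time) keeps downstream tallies of each dimension honest, since an LLM can occasionally emit an unknown label or omit a field.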