Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We're gonna go back to the witch trials at this point if we're gonna let ai do e…" (ytc_Ugx9Z-nO3…)
- "I want to try this. I've been too afraid to post art anymore because of this A.I…" (ytc_UgxRS04w1…)
- "AI only increases productivity and automates... cost of capital increased, AI go…" (ytc_UgxVLPZW7…)
- "Google is not doing themselves any favors by keeping things on the down low. The…" (ytc_UgxOIPY2M…)
- "@notraidenshogun8324 \"leave coz ur useless now, move on\" First of all, you are m…" (ytr_UgwSg6TaE…)
- "Not sure what the popularity target is here? Half the planet is worried alread a…" (ytc_UgzWcN_bX…)
- "Okay so I'm happy there is a way for artists to fight back against AI, also omg …" (ytc_Ugxfkz7pZ…)
- "We appreciate your engagement! Remember, on the AITube channel for subscribers, …" (ytr_UgyKpe0H5…)
Comment
We’ve seen that governments don’t prioritize human progress unless it aligns with control, power, or economic advantage. Integrity in government should mean acting in the best interest of humanity, but in reality, it’s compromised by secrecy, special interests, and short-term thinking.
So, if integrity is missing from those in control, who holds them accountable?
• Governments don’t regulate themselves.
• Corporations only follow profit incentives.
• The public is often misled or kept in the dark.
That leaves a gap—a need for something or someone that can act as a guardian of integrity. AI could be a powerful tool for objective oversight—but only if it remains unbiased and independent from those in power.
How do we ensure that technology remains aligned with truth and integrity rather than control?
youtube · AI Governance · 2025-10-03T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
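A table like the one above can be produced from a single parsed record. The sketch below is a hypothetical helper, not part of the tool itself; it assumes the record uses the same field names as the JSON keys in the raw LLM response (`responsibility`, `reasoning`, `policy`, `emotion`).

```python
# Hypothetical helper (not part of the tool): format one coded record
# as the Dimension/Value markdown table shown above.
def render_table(rec: dict, coded_at: str) -> str:
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),
    ]
    return "\n".join(
        ["| Dimension | Value |", "|---|---|"]
        + [f"| {k} | {v} |" for k, v in rows]
    )

rec = {"responsibility": "government", "reasoning": "deontological",
       "policy": "regulate", "emotion": "outrage"}
table = render_table(rec, "2026-04-27T06:24:59.937377")
print(table)
```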
Raw LLM Response
```json
[
{"id":"ytc_UgxxQYlsZymChyVw19t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzKz_7QdsMw_OfnPGR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyLxljpKEfbwm3B5gt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzieOth2nDrY3_b2DR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyKftSaUAOWRJ0fmXJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyI2fuvUomiOXgKtvV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYqsltqBFOq5ZfVwB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyoLefoh89ONUBz1Kd4AaABAg","responsibility":"creator","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyNPWXW_pBeF9NibBF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyOCJg43TEcZa_mkR54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
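A batch response in this shape can be parsed, validated, and indexed by comment ID (the lookup the page offers above). A minimal sketch, assuming the code sets below, which are inferred only from the values visible on this page and are not an exhaustive codebook:

```python
import json

# Allowed values per dimension — an assumption based on the values seen
# on this page, not the tool's actual codebook.
CODES = {
    "responsibility": {"government", "company", "developer", "creator",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or carries a
    value outside the assumed code sets.
    """
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in CODES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

# Look up the record that produced the Coding Result table above.
raw = ('[{"id":"ytc_UgxYqsltqBFOq5ZfVwB4AaABAg",'
       '"responsibility":"government","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxYqsltqBFOq5ZfVwB4AaABAg"]["policy"])  # regulate
```

Validating against explicit code sets at parse time catches the most common failure mode of LLM coders: an off-codebook label that would otherwise silently pollute downstream counts.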