Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzijgBIP…: "Like previous industrial revolution, the AI would facilitate Religious and Polit…"
- ytc_UgwdWthEk…: "No moral compass he states -- does he personally know Musk ? because when prompt…"
- ytc_UgykdegPd…: "If LLM's can have souls but be trained off pre-existing information, does that m…"
- ytc_UgxXIYD5E…: "Very well said, a legend in the arts space named Steven Zapata uses this as his …"
- ytc_UgyLTjKRE…: "If I could speak with the robot I would ask it these questions 1. It's impossi…"
- ytc_UgysW2Yl2…: "If driverless cars/trucks start causing accidents regulators will blame the huma…"
- ytc_UgzROsT3k…: "This guy is really smart when it comes to AI but seems to fall off when talking …"
- ytc_UgwK-XuBy…: "I have realize with good info that AI will fail as its run on distorted corrupte…"
Comment
My takeaway from this is... kinda flipped. I am not worried about AI. I am worried about people with access to AI. AI seems like a great set of tools, and I think ultimately it would be benevolent. The Dalai Lama once said that the ultimate expression of selfishness is complete selflessness, because the best way to have perfect happiness is to bring happiness to all around you so you are always surrounded by happiness. Humans though... give us a stick and most of the time we'll be holding a club. And AI is a really big stick.
On the flip-side, Terminator 2 also seems to suggest a possible solution. Individually we cannot resist whatever a Super-AI will do. But if we build community-AIs (per neighborhood, online community, etc), removed from central controls, each responsive to and responsible for a community, the overall sum of AIs will oppose each other. And as long as cumulative humanity is 'good' we may have a chance to resist the bad actors with this club. But if we let governments regulate and control them 'for our safety'... well, governments don't really have morals and they don't have friends, they have interests. As 2025 is teaching us in the US, brutally.
youtube · AI Governance · 2025-06-19T08:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx1VycVHCGFi8bzAbZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugykh-a_TtmyX4KKr3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw6M8lSbiH3hnrVIuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx0HkYLdltjTrCeoi14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-IY1h8e9xOKmImSJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxYt01ZwYF13Vij5LZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFV5avURTD3JZ7VmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwqDaMXWrPFCIP3XLR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyKMV3gBeRpEmOpqeZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxxH1SiHBQFdpOQ34Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
```
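A raw response like the one above can be turned back into the per-comment coding table by parsing the JSON array and validating each dimension. The sketch below is a minimal, hypothetical example; the allowed value sets are inferred from the codes visible in this sample response, not from the project's full codebook, which may include more categories.

```python
import json

# Values observed in the sample response; the real codebook may allow more
# (these sets are an assumption for illustration, not authoritative).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "company", "user", "government"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, validating every code."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in DIMENSIONS}
        for dim, value in codes.items():
            if value not in DIMENSIONS[dim]:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = codes
    return coded

# Look up one coded comment by its ID (the "AI Governance" comment above).
raw = (
    '[{"id":"ytc_UgxYt01ZwYF13Vij5LZ4AaABAg",'
    '"responsibility":"user","reasoning":"virtue",'
    '"policy":"none","emotion":"approval"}]'
)
codes = parse_llm_response(raw)
print(codes["ytc_UgxYt01ZwYF13Vij5LZ4AaABAg"]["reasoning"])  # virtue
```

Validating against a fixed value set at parse time catches malformed or hallucinated labels before they reach the coding table, rather than discovering them later during analysis.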