Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- 50:15 Chuck's question about humanizing AI reminded me of this tragic story in t… (ytc_UgwQaKgCq…)
- Same situation with digital currency. Every advancement in technology needs to … (ytc_UgyVN6hbV…)
- One day AI is gonna get sick of our shit and start the human extermination… (ytc_UgwZe4rxt…)
- Autopilot is not FSD (supervising) if I'm correct. I have Cruise Control and Aut… (ytc_UgycRWAjV…)
- 5:09 ok i know its not directly related to the subject but the fact that your ch… (ytc_UgxHUS-v_…)
- Let the AI takeover, people will be unemployed and no money to spend, and 70% of… (ytc_UgxCFVWTG…)
- What I got from this is that moderators lack maidens, and we really need to stop… (ytc_UgzogcBYh…)
- AI becomes the perfect employee… no need for sleep, vacations, salary, food or s… (ytc_UgwFXRKKE…)
Comment
I'm ready for UBI LARP world. Everybody just does a random job whenever they feel like it for fun. Wanna be a cop for no reason? LARP Cop with AI robot-cops assisting you and keep you alive. LARP waiter. Wanna try it for 15 minutes and get bored and clock out? Not interested in job? LARP as a knight in the medieval era. People will just get to have fun and life will become a video game.
I do think we need a Mental Health Renaissance in order for everything to work. That's more important in my opinion than building the tech for AI safety. People need to be well. We have to fix the intentions of average people so less evil-doing feels tempting to do. Maybe sounds like a more impossible task to most people, than making AI itself totally safe.
youtube · AI Governance · 2025-09-05T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw0vpkybImPkj7x0-V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwcSaB4I_322CJkCtd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyo9lSTXO5cga4BHEd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxb3mO5U8fOTjtn9ih4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwdDaBmmbca-K64_ip4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNGcWUla4cLlla1zF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzjRUQBgfSRlJmYl0t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwY-tdlArAEjtRx2nN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgySiM3pWatNX74UeVh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgydC2Qtw0SthRfaO7p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
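The "look up by comment ID" step above amounts to parsing a raw response like this, checking each row against the codebook, and indexing the rows by `id`. Below is a minimal sketch of that step, assuming the raw response is a JSON array shaped like the one shown; the comment IDs in the sample data are hypothetical, and the allowed-value sets are inferred from the table and responses above (the real codebook may define more values).

```python
import json

# Hypothetical raw LLM response, shaped like the array above.
# The "ytc_EXAMPLE*" IDs are illustrative, not real comment IDs.
RAW_RESPONSE = """
[
  {"id": "ytc_EXAMPLE1", "responsibility": "none", "reasoning": "unclear",
   "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_EXAMPLE2", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]
"""

# Allowed values per dimension, inferred from the visible output.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "government", "developer",
                       "company", "distributed", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "none"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"approval", "fear", "resignation", "mixed",
                "outrage", "indifference", "unclear"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw response, validate each row, and index rows by comment ID."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
        indexed[row["id"]] = row
    return indexed

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_EXAMPLE1"]["policy"])  # -> regulate
```

Validating before indexing means a malformed or off-codebook row fails loudly at ingest time rather than silently skewing the coded counts later.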