Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Comment
Discussions of AI 'alignment' are flawed as they usually stem from the assumption that 'human intelligence' is the baseline by which AI should be safeguarded.. have we looked at humanity lately ? I'm much less concerned with 'AI going rogue' then I am with humans having nefarious, individualistic perspectives using AI for their personal gains. As much as I enjoy listening to Yuval and sincerely believe he is a brilliant man, his AI interventions are less eye opening and more obvious. Stephen Fry is such a joy to listen to :)
youtube · AI Governance · 2025-07-23T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
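Each of the four coded dimensions is categorical. A minimal sketch of sanity-checking one coding result, using value sets observed only in the raw responses on this page (an assumption: the full codebook may define additional categories not seen here):

```python
# Allowed values per dimension, collected from the raw LLM responses shown on
# this page (assumption: the real codebook may contain more categories).
CODEBOOK = {
    "responsibility": {"user", "ai_itself", "distributed", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed"},
}

def validate_coding(entry: dict) -> list[str]:
    """Return a list of problems with one coded entry (empty list = valid)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = entry.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above:
row = {"id": "ytc_UgzbqfjgyXoCZG8j-P94AaABAg", "responsibility": "user",
       "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
print(validate_coding(row))  # []
```

The helper name `validate_coding` is illustrative; only the dimension names and values come from the page itself.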
Raw LLM Response
```json
[
{"id":"ytc_UgzUsvsPP8ngdFlARsR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYGqB5a6K4c8dYSvp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxowGGL9KETcwvMEH54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzbqfjgyXoCZG8j-P94AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy3h1ywG0970hemdN14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxY6vgV-dVxYaSVACJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugy0ZvL5eOgDbXVldBl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwrb_cHMcKboZxXHYx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugxi1PH-pEA9omRxEsd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyguEoBXsDH26UFXtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```