Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “So doesn’t that mean that some of the comments in the video are actually in supp…” (`ytr_Ugykmn6QH…`)
- “Yeah, and the title of this video isn't Waymo. It's self-driving cars, so he see…” (`ytr_UgxDmubMi…`)
- “Have you tried Google Antigravity with Gemini 3.1 pro? Works well for me. Coded …” (`ytc_UgyMBdqKm…`)
- “The AI. Customer service is absolutely terrible super annoying inflexible and in…” (`ytc_UgzugDUZB…`)
- “How do we convince AI to follow the golden rules? Change the word "God" to Life.…” (`ytc_Ugw1kgnHU…`)
- “The biggest mistake they made with AI is calling it “artificial intelligence” or…” (`rdc_ks2mpva`)
- “AI is a good thing, without it we wouldnt have robots, or technology that helps …” (`ytc_UgxVduHAO…`)
- “I can't draw and I don't even pose any of my AI crazion's. I guess I having the …” (`ytc_UgyTHc3Rx…`)
Comment

> AI will do everything better than us, but it can't connect it all together. Say you have a website and you sell fishing tackle... you need source the stock from a factory, create the branding to go on it, order the stock to the office, photograph the stock, edit the photo, upload it to the web, advertise it, pack it and send it. AI will be able to do much of that... but one valuable person can be both working physically and digitally. Humans are safe for longer than people think.

youtube · AI Governance · 2025-11-19T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIShsqcD7dcQBvGgl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugww1DAAc2BvSJZqlzt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzaOXESeN0B7NVZYeh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgygW72jSt5Ymn0JP-d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVJwpA0bx50wm4AYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwx49GKJjItl65s_VB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYD3lM6h5wRimt9n94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2RRQqu3OjLDABX-F4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxXaVKqF9qdVg-nzHV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzOnUCN6AL8KXNpQax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
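The raw response above is a JSON array of per-comment codings, each carrying an `id` plus the four dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output might be parsed and indexed to support the look-up-by-comment-ID feature (the field names are taken from the response above; the function name and the skip-malformed-entries policy are assumptions for illustration):

```python
import json

# Raw LLM output: a JSON array of coded comments. Shape taken from the
# response shown above; only two entries are reproduced here for brevity.
raw = """
[
 {"id":"ytc_UgwIShsqcD7dcQBvGgl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_Ugww1DAAc2BvSJZqlzt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
"""

# The four coding dimensions that appear in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw_json: str) -> dict:
    """Parse the model output and index codings by comment ID.

    Skips malformed entries instead of failing the whole batch,
    since LLM output is not guaranteed to be schema-clean.
    """
    by_id = {}
    for entry in json.loads(raw_json):
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # drop entries without an ID
        if not all(dim in entry for dim in DIMENSIONS):
            continue  # drop entries missing a coding dimension
        by_id[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return by_id


codings = index_codings(raw)
print(codings["ytc_UgwIShsqcD7dcQBvGgl4AaABAg"]["emotion"])  # approval
```

Indexing by ID in one pass makes each subsequent look-up a constant-time dictionary access, which is what an "inspect by comment ID" view needs when a batch contains many coded comments.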