Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples

- "All one needs to do is look at how technology in general has enabled a huge upti…" (ytc_UgwsYKxS4…)
- ">Real AI will behave like Futurama's Bender." "bite my generated image of a s…" (rdc_ks20x5n)
- "Let's just have an autistic savant design A.I. to attempt a definition of human …" (ytc_Ugxc80B2n…)
- "Yeah, you should not give kids homework at all. Homework was literally invented …" (ytc_UgznO07M0…)
- "This is a great real-world example of where AI agents actually deliver ROI 👍 Wha…" (ytc_UgxJz-e__…)
- "That's really weird take by someone who claims to follow AI for decades... There…" (ytr_UgyuUTz-3…)
- "Hey, can you drop off this load from Seattle WASHINGTON to Daytona Beach FLORID…" (ytc_UgxmvR8Z7…)
- "So many ignorants against UBI. In the future there will be hundreds of millions …" (ytc_UgjHcZWa6…)
Comment

> If, as the guest says he believes, we are living in a simulation, then why is it worrisome to him that AI safety might be shoved aside, leading to the end of humanity? Why should the prospect of the end of the simulation that we are living in trouble him?

youtube · AI Governance · 2025-10-31T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6ZPptRGN-VOz5Or54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0rhjxblJ2mWrPrkp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw9fkT0DVLr3cX7Ql94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwG6N-JLnNh5y13MHN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxOV814nrZYHx7R-P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw2zAAlkjF9i2qbnoV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxxeDd9h8jgQNF1x4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz05BdhmBXFP8PG8Lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-TVhAr6x0uOdNy194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgyDrfbptNcCFWv6B654AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
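A batch response like the one above has to be parsed and checked before its labels are stored against comments. The sketch below is one minimal way to do that, not the tool's actual implementation: the `ALLOWED` vocabularies are inferred only from the values visible in this sample (the real codebook may define more), and `validate_batch` is a hypothetical helper name.

```python
import json

# Categorical vocabularies observed in the sample response above.
# ASSUMPTION: the real codebook may allow additional values per dimension.
ALLOWED = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval",
                "resignation", "unclear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID.

    Raises ValueError on duplicate IDs or out-of-vocabulary labels,
    so a malformed model response never reaches storage silently.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        if cid in coded:
            raise ValueError(f"duplicate comment ID: {cid}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        # Keep only the coded dimensions, dropping any extra keys.
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID mirrors the page's "look up by comment ID" flow: once validated, `coded["ytc_…"]` returns exactly the four-dimension record shown in the Coding Result table.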