Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I expected to hear about how studios are profiting from AI at the expense of wri… (ytc_Ugx7OF3jk…)
- "In the age of AI, it is clear that human beings will rely increasingly on advan… (ytc_Ugwk84XtE…)
- If u really care for the unthinkable horror yr invention can release in the nex… (ytc_UgxnlOZT9…)
- Was she ever in Denmark? We need her here too. Lots of AI-centers wants to be bu… (ytc_UgwYGmEKo…)
- These Waymo cars need to be banned. Whoever invented these self driving cars is … (ytc_Ugw-SruRB…)
- At this point, AI just plays along with whatever people ask. The AI registers as… (ytc_UgzoYafH9…)
- At 20:40 he refers to the intelligence explosion and says self-replication won’t… (ytc_Ugz0kaX2R…)
- Your video is a bit late. Ai took over 30% of programming jobs already! So yes ,… (ytc_UgwBcxsiU…)
Comment
I am listening from Italy, my question is, why don't we teach AI the basic things that are making US humans, the best that a human being can be: feel empathy, compassion, solidarity, feel the need of cooperation and helping and caring for self and other, feel spirituality. I think this Is the only way in which AI can be complete and that the humans can be safe. In these conversations the center Is the brain, the logic and analitic intelligence. But there is much more in a human being. Listening to these scientists I hear cold people with cold logic. They cannot teach to the AI any emotional intelligence. That's the issue.
youtube · AI Governance · 2026-02-13T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugx7hUe9PNVhI9JITHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyyRmEZLx5xq7xmf5R4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugww5L6LhJkdt4NJ7w94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxPJOiDgl2N833aa-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBYqXGAjBYndsCtQN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzfenQVDrIz3LYvx-l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgztUqH8Zj0y01UZHJ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxiMbW2R10zSEN1bNx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgztpfR-_trXKFkSYnN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwlfZZbzz5LPtqgKQ94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]