Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We all knew about AI back when we were in 4th grade learning BASIC code. But som…" (ytc_UgylxqDoG…)
- "The notion that for General AI to be human like real humans requires some sort o…" (ytc_UgzxNjOJ3…)
- "If I may, everyone will be replace, big CEO and multimillionaire included. AI wi…" (ytc_Ugxhz4ddo…)
- "I'm a disabled artist and my disability never rendered me incapable of doing …" (ytc_Ugz3JoNRI…)
- "If you use AI the correct way, it can actually help you learn. However, a lot of…" (ytc_UgwQIlQ37…)
- "Or willingly volunteer to stay in this broken world. We all know it is broken an…" (ytr_UgxOGg8UN…)
- "The fact that people genuinely believe this makes me hope ai actually takes over…" (ytc_Ugx5LpGjZ…)
- "As a cognitive scientist my take on the current understanding of human consciou…" (ytc_UgzxXbLfv…)
Comment
Very interesting interview. It's good to see a moderate conversation without extremism, which is difficult for me to find lately (maybe because of social media).
What caught my attention was the new NATO agreement to spend 5% on defense, and the clause from Europe saying that the AI regulation does not apply to the military!
With all this development, shouldn't we be cautious about the future of the world?
youtube · AI Governance · 2025-07-09T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
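The coding result above assigns one value per dimension. As a minimal sketch, a record like this could be validated against the value sets that appear in this page's raw response; the allowed values below are inferred from the visible codes only, and the real codebook may define more (the `ALLOWED` dict and `validate` helper are illustrative, not the tool's API):

```python
# Allowed values per dimension, inferred from the codes visible on this
# page (an assumption; the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"contractualist", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs whose value falls outside the allowed set."""
    return [(dim, record.get(dim)) for dim, values in ALLOWED.items()
            if record.get(dim) not in values]

# The coding shown in the table above passes validation.
coding = {"responsibility": "government", "reasoning": "contractualist",
          "policy": "regulate", "emotion": "approval"}
print(validate(coding))  # []
```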
Raw LLM Response
```json
[
  {"id":"ytc_UgxU2bWA457z-QMpgvZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx4zzloMA9x2hVbbfF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxq2hJdasqFtSf20254AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxYPDFeGPjEVi72wwJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxiHsIXSroI9pIBIoh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzNVCDhZlG2JmGJaiF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx80ZqjtFaPu1rArIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugzt5AGoDN7iibkaHRp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzFiOCm4Ap4bo9dKSB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_OlHWYtfvWH-48fN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
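The "look up by comment ID" feature described at the top of this page can be sketched as follows: parse the raw LLM response (a JSON array of coded comments) and index the records by their `id` field. This is a minimal illustration, not the tool's actual implementation; the function and variable names are assumptions, and the sample record is one entry from the response above:

```python
import json

# One record copied from the raw response above, wrapped as a JSON array.
raw_response = '''
[
  {"id": "ytc_Ugzt5AGoDN7iibkaHRp4AaABAg",
   "responsibility": "government", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"}
]
'''

def index_by_id(raw: str) -> dict:
    """Map each coded comment's id to its full coding record."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codings = index_by_id(raw_response)
record = codings["ytc_Ugzt5AGoDN7iibkaHRp4AaABAg"]
print(record["policy"])  # regulate
```

With the full response indexed this way, inspecting the exact model output for any coded comment is a single dictionary lookup.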