Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Well AI can’t be any worst than a lot of the teachers nowadays. Educational syst…" (ytc_Ugx4wCelD…)
- "What happens if I carry a big cardboard sign of a green traffic light and put it…" (ytc_Ugw60LO2c…)
- "...and agent 3 creates human looking AI robot and decides to register him with r…" (ytc_UgxyZ-r1R…)
- "Altman is arguably the right leader for OpenAI and for responsibly steering the …" (ytc_Ugya2xGKt…)
- "To everyone reading, be reminded that AI in itself has no willpower. Every job r…" (ytc_Ugxg3Sc6L…)
- "It's because \"programming\" a filter for an LLM is literally saying \"without sayi…" (ytr_UgyONo0mv…)
- "In this first half whatever you said that scientists don't know how AI work. I a…" (ytc_UgwgIRF5x…)
- "To make a statement that AI will have 10-20% chances of taking over the world, s…" (ytc_UgywSyXcl…)
Comment
2:22 now that a lot of knowledge has been programmed into AI and it's developed logic, what its capable of at this point is a lot, the foundation of intelligence is logic, it turns out what holds intelligence back is computational power, what holds human intelligence back is computational power, we could reason much better if we had more computation,
youtube · AI Governance · 2025-08-03T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzUWwrqDtS0-8z5S6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzlOiYWc0Q5h-7pU-F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9k15eQ4sX9m333JJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyF4hOVzguwkH7OtC94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwglVebZmJPD0n31ox4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxz03L1qvECd980HR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyfwR_0tXbY2M3bFPx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTqTyR1QYD84wcRTB4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzlWcbyha7eHoqlkVJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdVIm40oaQgkosrv14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
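The raw response above is a JSON array with one code object per comment. A minimal sketch of how such a batch could be parsed and looked up by comment ID — note that the allowed value lists below are inferred from the codes visible on this page, not from the tool's actual codebook, and `parse_codes` is a hypothetical helper:

```python
import json

# Coding dimensions and allowed values, inferred from the codes shown on
# this page (an assumption; the real codebook may define more values).
DIMENSIONS = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

# Two rows copied from the raw response above, as a small sample batch.
RAW_RESPONSE = """[
  {"id":"ytc_UgzUWwrqDtS0-8z5S6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlOiYWc0Q5h-7pU-F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""


def parse_codes(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, dropping malformed rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip rows the model emitted without an ID
        # Keep only rows where every dimension has an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            coded[cid] = {dim: row[dim] for dim in DIMENSIONS}
    return coded


codes = parse_codes(RAW_RESPONSE)
print(codes["ytc_UgzlOiYWc0Q5h-7pU-F4AaABAg"]["emotion"])  # indifference
```

Validating each row against the dimension lists before accepting it guards against the model inventing out-of-codebook values, which would otherwise silently corrupt downstream tallies.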