Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I’m assuming Alex meant this, but perhaps it was a happy accident: I think the i…" (ytc_UgygCCjC4…)
- "As both a historian and someone who is interested in the business and logistics …" (ytc_UgwEfLD86…)
- "interesting and thought provoking, it will be fascinating to see how countries b…" (ytc_UgxwdtaF6…)
- "AI is a tool, a knif eint he hand of a surgeon saves life, a knife int he hard o…" (ytc_UgxNRYQKZ…)
- "Nicely done, it's about her being wrong or right, it's about tf can you do? And …" (ytr_Ugw0nNMS_…)
- "AI abilities wont come intrinsically from itself, it will come from us, because …" (ytc_Ugye5v8Ei…)
- "Let's make our politicians adopt policies against AI, tax everything that's AI r…" (ytc_UgwK4kE8U…)
- "How AI destroy humans is first they create robots. The robots kill humans. The r…" (ytc_Ugx101SBM…)
Comment
If Humans don't use AI to completely eradicate Humans, we could have a utopia where no one has to work unless they want to. They can pursue things they want to pursue, and money could be insignificant and not needed if we have super intelligence and robots doing the jobs that humans do and doing it better. This is the best case scenario and something im hopeful for.
youtube · AI Governance · 2025-09-04T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzGAxR0ZpL827MQCBZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwcL_rA1xddx1kpSXh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwIz6QS8GTcYWvR6zh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmP4RrA6s3nV2xsHp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKYpAtqDa8dRwLZkN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzR8DNTzjuftKmImeZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyx6TztHxemBz32Wcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZ782WMmQBEi8M59l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwj6BClkejWt5vtWNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqOAlBXHraWtg7rHd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
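A batch response like the one above can be parsed and sanity-checked before its rows are stored as coding results. The sketch below is a minimal illustration, not the tool's actual ingestion code: the allowed values per dimension are assumed from the examples shown here (the real codebook may define more categories), and the comment ID `ytc_example123` is a hypothetical placeholder.

```python
import json

# Allowed values per coding dimension. These sets are assumptions
# inferred from the sample output above, not the full codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "government"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the assumed schema, so bad codings fail loudly.
    """
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}"
                )
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a single hypothetical row in the same shape as above:
raw = (
    '[{"id":"ytc_example123","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
coded = parse_batch(raw)
print(coded["ytc_example123"]["emotion"])  # approval
```

Indexing by ID is what makes the "look up by comment ID" view possible: each inspected comment maps directly to one validated row of the raw model output.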