Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Companies using AI won't really talk about how they use it, they'll probably kee…" (ytc_UgxMAIMr8…)
- "The data isn't biased the data is factual.. You can't be angry that it shows cer…" (ytc_UgxR0oetz…)
- "Judging from the kind of art you draw, I have a good feeling that AI bros aren't…" (ytc_UgxWNl37W…)
- "A human learns all things, not written in the DNA while living. So, a Robot that…" (ytc_UgwatYYTP…)
- "but you see, *my* company will replace all of its workforce with AI before other…" (rdc_ohzq3k7)
- "Why should AI not tell people where an assisted suicide clinic is, is someone di…" (ytc_UgyoS0yvN…)
- "AI can never be truly sentient but eventually it will be able to appear sentient…" (ytc_Ugya-ZxUK…)
- "I was at MIT in 1959 in a small group with Claude Shannon (father of information…" (ytc_UgzKJTQKc…)
Comment

> Going to be interesting when AI is generating interviews just like this where its risks are downplayed by its creators. Over time the majority of content we consume could become AI generated without us noticing. Honestly I don’t care. I look forward to universal income and an end to politics. Make the entire planet into paper clips for all I care.

youtube · AI Governance · 2025-12-31T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugw2YucCK3tRAoXAp7J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyG6Btj_Atn2lIOlex4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxtIwG0c66D-xqFq8R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwltqC4tgZwpUThzj94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRisYqwzqvHBH4XbV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzmz-8sHJdFmhaFRRB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugy8tDnbCs8Cj08omZl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQiR3lwM1sSY9Y3MF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwAQ4tUYSUJ_DrDyz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz5hF64a2S9zWOr28B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
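A response in this shape can be parsed and checked before it is stored as a coding result. The sketch below is a minimal validator, assuming the allowed values per dimension are only those visible in the sample output above (the project's actual codebook may define more categories, and the function name is hypothetical):

```python
import json

# Allowed values per dimension, inferred from the sample response shown
# above; the real codebook may include additional categories (assumption).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"resignation", "outrage", "indifference", "approval", "fear"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index rows by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside SCHEMA, so malformed model output is caught before storage.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: bad {dim!r} value {row.get(dim)!r}"
                )
        # Keep only the coded dimensions, keyed by comment ID.
        coded[comment_id] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

Indexing by comment ID is what makes the "look up by comment ID" view cheap: each coded comment becomes a single dictionary access rather than a scan of the raw response.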