Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Alex, you are dead on right about the potential use of AI to help straighten out the government legal infrastructure. The US NRC is a case in point. They have so much unintelligible trash in the rules, and rule interpretations that doing the research for anything is horrendous. Using AI to help improve government infrastructure and the information fed into that is a real time NOW use of AI that should go forward as a test bed now. Here's an example: What if we had AI use all the various federal legal content to develop mimic Supreme Court findings. AI could help backstop or inform the decisions the justices pass down. Some of the findings in the past few years seem to many simply bizarre -- contrary to the most basic tenets of the Constitution and our interpretations of it. To me, using AI for that kind of application today is a very current highly valuable use. We could learn the risks as we go.
Platform: youtube · Topic: AI Responsibility · Posted: 2026-04-23T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx0jZGEmmCzzkikMBB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwQ6HmQY2a3kGiSktJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwx4dX6hKC-CB73qfB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyg5a6J5GDAXKcPRwd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz8cyF77l0p_laKkK14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzcnNKJLbe-jrQoJR94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgySG8PPi1T1wqVcEsF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwtIwXwAsns8BusHXl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzQG2oCPWZXIUZaEYB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwSTqsoKCQPeNK0jX94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
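The raw response above is a JSON array of per-comment codings that must be parsed and matched back to comment IDs to produce the Coding Result table. A minimal sketch of that step is below; the `ALLOWED` value sets are inferred from the visible output only, and the real codebook may define additional categories (an assumption), as is the `parse_coding` helper name.

```python
import json

# Allowed values per dimension, inferred from the visible coding output;
# the actual codebook may include more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "government", "company", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability",
               "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "resignation", "fear", "indifference"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of codings) into a dict keyed
    by comment ID, dropping any row with an unrecognized dimension value."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Looking up `coded[comment_id]` then yields exactly the four dimension/value pairs shown in the Coding Result table for that comment.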