Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples (previews truncated):

- ytc_UgzyhB_kl…: "We not there yet. This is the Atari 2600 of AI companions. But considering there…"
- rdc_kqsyt6m: "Who does AI benefit? Why are we rushing to create something that brings little t…"
- ytc_UgyJVmbbe…: "thats what she doing. you believe its actual AI. not trained todo LOL you believ…"
- ytr_UgxaqDO3G…: "Actors in movies too. Why would studios pay Millions to actors when they can jus…"
- ytr_UgxFm4vgg…: "@johnbrown1867well that's usually the problem. You don't see it until it's too …"
- ytc_Ugzd4Lk6K…: "This literally never happened. 95% of all bitterness and butthurt comes from art…"
- ytc_UgxNgUZXn…: "If AI replaces my job… then I'm giving up on America and moving somewhere that d…"
- ytc_Ugys477TF…: "I've struggled to express my ideas in art form since I was little. I've done man…"
Comment

> I'm only 6 minutes in and I've already spotted a mistake. He says he's worried about AI being smarter than people. I don't mean to be a Debby downer but chat gpt is smarter than 90% of people on earth. What he's really worried about is when chat gpt becomes smarter than him. Google is already there, chat gpt is already there, grok is already there. Olamma is getting there. Let me ask you a question. if people are so smart then why are they using AI?

Source: youtube · AI Governance · 2025-06-24T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwTl3m0AXxzXTjih0h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwelU_5kpWvO0TAKIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwDizRUkOTGyRP-S-94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyShMfp1bNNLbdU-KF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxSSTFQ9D916LhgV-94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0AoDBGXVt8HJ9qlN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYU_lZT-3PXTwWxUR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzq2O1caxtU9oLtTZt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQaz_bw9YKP1SfwfN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwn2tQBuMi381Garht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
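The lookup-by-ID flow described above can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment codings) and index it by comment ID. This is a minimal sketch, not the tool's actual implementation; the `raw` string is a two-entry excerpt of the response shown above, and the `lookup` helper name is hypothetical.

```python
import json

# Two-entry excerpt of the raw LLM response above (the real input
# would be the full JSON array).
raw = """[
  {"id": "ytc_UgwTl3m0AXxzXTjih0h4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwelU_5kpWvO0TAKIV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning,
    policy, emotion) for one comment ID."""
    return codings[comment_id]

print(lookup("ytc_UgwelU_5kpWvO0TAKIV4AaABAg")["emotion"])  # fear
```

Keying the array by `id` once up front avoids re-scanning the full response every time a coded comment is inspected.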