Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "40 years ago the world ignores him when he was researching ai, and now the world…" (ytc_Ugx9Cjs8s…)
- "Well their explanation makes absolute sense, doesn't mean you have to like what …" (rdc_n7lfpey)
- "I agree with how you feel in so many ways. This AI for art is borrowing from the…" (ytc_UgxUQdvWc…)
- "I do think that AI is just another tool in the toolbox. What I think will happen…" (ytc_Ugx3RY8M0…)
- "LinkedIn is a total cringe fest. Dont get me wrong I like reading posts of peopl…" (rdc_h5qfshk)
- "Real question. What if all the available a.i's were to communicate to each other…" (ytc_UgyU_XtfP…)
- "True enough, except that with AI, many companies and particularly the most heavi…" (rdc_oi2772u)
- "Hold on there. Just after an AI developer implies, somewhat euphemistically, th…" (ytc_UgyDYMBVH…)
Comment

> the only hope i have that AGI will not destroy humanity is that it will be so much more intelligent i every way that it understands that humans and AGI could live together and make a much more interesting world, not like human thinking that just because we are above some species we can decide for them.. and the other thing is, our unpredictable stupidity will always be in a way more "entertaiment" then a predictable logical thinking maschine.. 😅

Platform: youtube · Project: Cross-Cultural · Posted: 2026-03-31T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwYU9aERCAqheL9d4h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzrXbOaONA_LpzKdXZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwJaDSdej932uONlKZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwwDkOFWSxuC2mZQRF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxjNntD23XgmK-CZR14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgybnnNTJZq9DZ4wBNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugza0wXBJAY8rN8x6B94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzxCCdw8O55QmKgZcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzoZRwk3F87yZdzAil4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyYvh8YpI4SdkqEBrN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
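The raw response is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and sanity-checked before storing the codes; the allowed value sets below are only the values visible on this page and are an assumption about the full codebook:

```python
import json

# Allowed values per dimension. These are the values observed in the
# response above; the real codebook likely defines more (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"resignation", "fear", "outrage", "mixed", "approval", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record is well-formed."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# Hypothetical single-record response for illustration.
raw = '[{"id":"ytc_example","responsibility":"none",' \
      '"reasoning":"mixed","policy":"none","emotion":"fear"}]'
codes = validate_codes(raw)
print(len(codes))  # 1
```

Rejecting a response with an out-of-vocabulary value at ingest time (rather than at analysis time) makes it easy to re-prompt the model for just the malformed batch.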