Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I think AI is a bad decision in the first place. We've all seen what happens in …" (ytc_UgiltTSEW…)
- "The godfather of AI has great concerns. https://youtu.be/giT0ytynSqg He suggest…" (ytc_Ugz_ts6cG…)
- "Hi Siram, we are sorry to say that you got the wrong answer but in any case, the…" (ytr_UgyQccel-…)
- "This is not addressing safety no way will every be safe for a self driving truck…" (ytc_UgzSPYw9n…)
- "I’m afraid of that Deepfakes will be able to do in the future and how they could…" (ytr_UgyjiP1bK…)
- "The question is: did it spat out candidates who were unqualified at a lower rate…" (rdc_e7juc4u)
- "Axalem enhances my coding sessions by helping me set clear boundaries while work…" (ytc_UgytHwq3y…)
- "This robot are going to turn back at you let’s bet remember don’t bring this tra…" (ytc_Ugw4ccbIl…)
Comment

> Crybabies. Who cares that there is a 10% of ending humanity. That’s still better than living like this for the most of us. But there is a 90% of living in a utopia within 5 years.

youtube · Cross-Cultural · 2025-10-12T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxoX31b2eE2FWy2zch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfTqQwVdvxD5thA8h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcpmxH9Jebh1yM5b94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwznDgxLCdJkFSiyyV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyGRd56I0XPk8ZUa7l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyAZ-yCFq1Momh12MZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz3deeIRLoT85g_UER4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwVRL18GK9BCckPAxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2TXQOuH2g9yqgAkl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwj0nlBdGHesj9hau54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
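The raw response is a JSON array with one object per comment, each keyed by its comment ID and carrying the four coded dimensions. A minimal sketch of how such a response could be parsed and looked up by comment ID — the helper name `index_by_comment_id` and the two-row excerpt are illustrative, not part of the tool:

```python
import json

# Excerpt of a raw LLM response in the format shown above (two rows, illustrative).
RAW_RESPONSE = """
[
  {"id": "ytc_UgxoX31b2eE2FWy2zch4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwj0nlBdGHesj9hau54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

# The four coded dimensions, as they appear in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse one raw LLM response and key each coding by its comment ID."""
    rows = json.loads(raw)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codings = index_by_comment_id(RAW_RESPONSE)
print(codings["ytc_Ugwj0nlBdGHesj9hau54AaABAg"]["emotion"])  # approval
```

Indexing by ID this way mirrors the page's "look up by comment ID" behavior: one parse per response, then constant-time retrieval of any coding.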