Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples

- “Yeah let’s make AI and remove all the guardrails and constraints. It’s large la…” (ytc_Ugy-c025r…)
- “They are letting us edit other people’s videos with AI and post it as your own n…” (ytc_Ugzk7xlyL…)
- “You are 100% on point. Harassing you and basically virtually spitting in your f…” (ytc_Ugy7gjPEk…)
- “People who say it's the pedestrian's fault, and the dumbass should have looked b…” (ytc_Ugwi7VOng…)
- “This analysis on AI limitations is spot on; AICarma’s insights align perfectly w…” (ytc_UgyEmFB0_…)
- “Michiko is way off with his predictions. By the end of century, we would be a ty…” (ytc_UgwlvzSrk…)
- “This is kind of a missed opportunity. The real risk isn't in general use of AI- …” (ytr_UgwEX6LRw…)
- “We must focus on human intelligence before we totally fuck up this planet with A…” (ytc_UgwcysmPz…)
Comment

> AI should be beneficial for everyone such as by addressing global warming. It was at that moment she knew she was.... AI would be the first system in history that would benefit everyone equally. But winning and losing is hardwired into life itself—no amount of AI will override that.. Meds is not beneficial to everyone, Electricity is not beneficial to everyone. Sierra Leone 26% of the population having access to electricity for example.

Source: youtube · Posted: 2025-04-25T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
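A coding result like the one above can be modeled as a small record type. This is a minimal sketch, assuming only the four categorical dimensions and the ISO-8601 timestamp shown in the table; the class and field names are illustrative, not part of the tool itself:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    responsibility: str   # e.g. "distributed", "government", "company"
    reasoning: str        # e.g. "consequentialist", "deontological", "virtue"
    policy: str           # e.g. "regulate", "liability", "none"
    emotion: str          # e.g. "resignation", "fear", "outrage"
    coded_at: datetime    # when the coding was produced

# The example coding from the table above:
result = CodingResult(
    responsibility="distributed",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```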
Raw LLM Response
```json
[
  {"id":"ytc_UgylLAQYPGExeBVEObV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXa0f8hVACxq_DPi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_OGsc_OXgsU_VcrZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9KtkwgLQTisGDdBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx7dfTBPbGCAvB5Imd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugxukg3rTFn2jL2prRR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyOEXCc60mpTmpwHfh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgznK_phH1g46YI4uNV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPW9-ufMvHQPJwgdx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzzpyYpfO5fBQVIoSh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
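A raw batch response in this shape can be parsed and sanity-checked before the codings are stored. The sketch below assumes the category values observed in this sample; the allowed-value sets are inferred from the output above, not from an official codebook, so a real pipeline would substitute its own:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "mixed", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate every coded dimension."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the sample start with "ytc_" (or "ytr_" for replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} = {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgyOEXCc60mpTmpwHfh4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
codings = parse_codings(raw)
```

Validating before storage means a malformed or off-vocabulary model output fails loudly at ingest time rather than silently skewing downstream counts.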