Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> I like AI/AGI, but let's create it smartly. I fear we are creating an intelligence without ensuring we have viable measures to counter potential risks posed by AI. I don't think we'll ever be able to fully control AI/AGI, but we can be intentional about creating measures, restrictions, limitations, safeguards, layered multiple factor authenticating processes. We should have contingencies in place. Because one nuclear detonation, or EMP, could send us back to the dark ages.

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-12-07T17:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyPNnDjzvWET6UOIjZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxyscs1mSLmFQ0r-B94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxaeJW7gFT2qWedbn54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxGbLiyTZxqI7aCXs14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz3cAxJI6zJELtqLJp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyNao5FRkGL2gNTAZx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJbPpNnfR0UF3cR4h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwdyQ7wAcrbF2AIGfJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyypUOjL2kggO_n-NB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz-QDw1y_y-7xMhIpd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
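A raw response like the one above is a JSON array of coded records, one per comment, keyed by comment ID. The sketch below shows one way such a batch might be parsed and validated before use. The `SCHEMA` sets are assumptions inferred only from the values visible in this section; the project's actual codebook may allow additional values, and `parse_coded_batch` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the values visible
# in this section's table and raw response; the real codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting any out-of-schema dimension value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        # Keep only the coding dimensions, indexed by the comment ID.
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Usage: look up the coding for one comment by its ID (record copied from above).
raw = ('[{"id":"ytc_UgxaeJW7gFT2qWedbn54AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_UgxaeJW7gFT2qWedbn54AaABAg"]["policy"])  # -> regulate
```

Validating at parse time is useful here because an LLM can occasionally emit a value outside the codebook; failing loudly on such records keeps them from silently entering the coded dataset.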