Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or pick one of the random samples below to inspect.
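The same lookup can be scripted against wherever the coded comments are persisted. A minimal Python sketch, assuming the results live in a JSON file keyed by comment ID (the file name `coded_comments.json` and its layout are assumptions, not the tool's actual storage):

```python
import json

def load_raw_response(comment_id: str, path: str = "coded_comments.json") -> str:
    """Return the stored raw LLM response for one coded comment.

    Assumes a layout like {comment_id: {"raw_response": "...", ...}}.
    """
    with open(path, encoding="utf-8") as f:
        coded = json.load(f)
    return coded[comment_id]["raw_response"]

# Example (ID taken from the samples below):
# print(load_raw_response("rdc_dcwur99"))
```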
| Comment ID | Preview |
|---|---|
| ytc_UgzxA64_k… | Enforcing immigration law isn’t against the law. Read that sentence again if you… |
| rdc_dcwur99 | "We're gonna defund abortions and make the Netherlands pay for it! When the Neth… |
| rdc_l57ibcg | I guess it depends what they're doing with it. It's one thing to trust Github to… |
| rdc_m1ybgnw | cause its ai strapped to traditional lasik? ○ It creates a digital twin of your… |
| ytc_UgzvdiwfE… | People kept telling me about ai so I chatted with one, definitely dangerous. Thr… |
| ytc_Ugwtfob52… | The problem is exactly what he is describing at the beginning of the video. Whil… |
| ytc_Ugw-T0rV_… | Yes, if you use Ai badly you will get bad results. Shocker. All you would have t… |
| ytc_Ugxs9LbVh… | The idea that disliking blatant AI theft or being critical of AI makes you a "lu… |
Comment
That last item - the idea that a super intelligent AI will also have difficulty in predicting the future - that's a weaker defence than one might think. An AI doesn’t have to be perfect at predicting the future to outsmart humanity: being better to the tune of one "move" ahead better than the best of us would be enough - if combined with suitable strategic intelligence.
Granted, that's a hard ask because generally the difficulty of prediction grows exponentially with the number of moves. But it's not inconceivable that, say, an AI has an order of magnitude more predictive capacity and can therefore climb the exponential prediction curve by one pip more than the rest of us.
youtube · AI Jobs · 2026-03-23T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
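Structurally, each coding result is a small record over four categorical dimensions plus a timestamp. The sketch below is a minimal, assumed Python representation that uses only the label values visible on this page; the project's actual codebook may define additional labels, and all class and field names here are illustrative.

```python
from dataclasses import dataclass
from typing import Literal

# Label values observed on this page; the real codebook may define more.
Responsibility = Literal["none", "ai_itself", "company"]
Reasoning = Literal["consequentialist", "deontological", "mixed"]
Policy = Literal["none", "regulate", "liability"]
Emotion = Literal["fear", "indifference", "approval", "outrage", "resignation"]

@dataclass
class CodingResult:
    """One coded comment, mirroring the four dimensions shown above."""
    comment_id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

# The row displayed above, expressed as a record (comment_id is illustrative).
example = CodingResult(
    comment_id="ytc_example",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
```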
Raw LLM Response
```json
[
{"id":"ytc_Ugxc3AIebUSSPHyX-OV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw49ApdyIlMWaZLwl94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3fb_zzgDgniqZPwJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjE6DMk8bpCNb2nKd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugznj6spK6yXAr4LiCR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1nxmeZkBfBNH9AlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCw4BUs2UVbF7e9wN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYADLwEpsyvaADiFl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyczZ-xxlVUNt_mXZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwgzTqtfKnEWzjMEWt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
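The raw response is a single JSON array with one object per comment in the coded batch, so any comment's row in the Coding Result table can be recovered by parsing the array and indexing it by `id`. A minimal Python sketch, assuming the raw response text is available as a string:

```python
import json
from typing import Dict

def index_batch_response(raw: str) -> Dict[str, dict]:
    """Parse a raw batch response and map each comment id to its codes."""
    entries = json.loads(raw)
    expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
    index = {}
    for entry in entries:
        missing = expected_keys - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')} is missing {missing}")
        index[entry["id"]] = {k: entry[k] for k in expected_keys - {"id"}}
    return index

# Usage, with an id taken from the array above:
# codes = index_batch_response(raw)  # raw holds the JSON array as a string
# print(codes["ytc_Ugw49ApdyIlMWaZLwl94AaABAg"])
```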