Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgxFHq8IJ…: "Honestly, there seems to be a strawman fallacy in here. I've never seen an AI ar…"
- ytc_Ugy6SQe9i…: "please keep doing this. Destroy AI please. There are so many AI tracers + scamme…"
- ytr_UgxT7RhFT…: "You're right that people would have to give it "unregulated agency", in the form…"
- ytc_UgwN32k9h…: "OpenAI and others took everyone data so we should all own a percentage of that w…"
- ytc_Ugzfr2kH7…: "Holy damn this is so wrong!!! Gender or race and a few others should never even …"
- ytc_UgxjR42vj…: "If we knew aliens were coming in five years, fear would consume us. Everyone wou…"
- ytc_UgzgvAS-H…: "And… the AI one still looks better than all of those. I need someone to make a t…"
- ytc_UgyVaiDcb…: "Yet all we've seen so far is spam bots and AI slop destroying the internet.…"
Comment
TBH Dan and his organization ceased to make much sense to me after they pivoted to a "NatSec above all" stance. I don't know why but hey that "mutual assured AI malfunction by default, therefore accelerate" paper is simply pure e/acc isn't it.
My guess since then is that they are not (or no longer) working for the safety of humanity and our civilization as a whole. When John joined them my guess was that it won't end well (but I refrained from saying it publicly, so no bayesian points for me). I didn't expect it to go south so quickly though. What a shame.
Platform: youtube · Topic: AI Governance · Posted: 2025-05-21T15:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwmQMx2YXqMK9rAwqd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyAj1ODGdh9iwcsEaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxh9XJdqNH909ewyr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxahXa1KD7SWsO1tTJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzt4cId0Cya7suGIM54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzu_6S4UgMjf_nk9U54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw03Hy_8JEyyqKwZ-l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyBKr1k2kwXpxoa5cN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzqdwi3JV_V4kgmeuB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzgm57jBiilQTuMkmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
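The raw response above is a JSON array of per-comment codes across the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch could be parsed and validated — the codebooks below are assembled only from the values visible in this sample and are an assumption, not the project's full codebook:

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the sample
# response above (ASSUMPTION: the real codebooks may be larger).
CODEBOOK = {
    "responsibility": {"company", "ai_itself", "none", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "resignation", "approval", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # IDs in this tool start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment ID: {rec.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with the first record of the sample response:
raw = ('[{"id":"ytc_UgwmQMx2YXqMK9rAwqd4AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
records = parse_coding_response(raw)
print(Counter(r["emotion"] for r in records))  # Counter({'outrage': 1})
```

Validating against a fixed codebook at parse time catches malformed or hallucinated labels before they enter the coded dataset.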