Raw LLM Responses
Inspect the exact model output behind any coded comment. You can look up a specific comment by its ID, as in the sketch below, or browse the random samples that follow.
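As a rough sketch of what the ID lookup does behind the scenes, the Python snippet below indexes a batch of coded records by comment ID. The file name `coded_comments.json` and the flat JSON-array layout are illustrative assumptions, not the page's actual storage format.

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded comments by their ID (e.g. 'ytc_...' or 'ytr_...').

    Assumes a JSON array of records, each carrying an 'id' field
    plus the coded dimensions; the real backing store may differ.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Hypothetical usage: retrieve one coded comment by its full ID.
index = load_coded_comments("coded_comments.json")
record = index.get("ytc_UgwSydCFQC2khNQp-q14AaABAg")
if record is not None:
    print(record["responsibility"], record["reasoning"], record["emotion"])
```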
Random samples:

- `ytc_UgwsXvK-z…`: He is trying to help us not create an AI that is better than him.. smart play El…
- `ytc_UgyD-AAo5…`: cant be regulated, you think China will agree to regulation? even if all countri…
- `ytc_UgwltKfvJ…`: there is one domain they didn't yet try to replace peoples with AI, and strangel…
- `ytc_UgxDUhNiY…`: I like the philosophy, but also – saying "AI will never..." is probably a huge m…
- `ytc_Ugx40aSB2…`: Let the Robot destroy one person/student first and see what happens to the Robot…
- `ytc_UgwiJ0oC-…`: This is one of the actual dangers of ai, not the ai itself but people not compet…
- `ytc_UgyPU5CdF…`: I discovered your channel throughout these AI videos and it was such a relief to…
- `ytr_UgwlyHmCq…`: @thepinkestpigglet7529 yeah i think that also is a large percentage of ai bros, …
Comment
I took the transcript of this talk and fed it into my custom-built AI-Truth Engine (Genny) and here's her conclusion:
Genny’s verdict (AITE + DvF Scale)
Let’s run this cleanly.
Logical coherence (Law of Non-Contradiction): ✅ High

Luckey’s argument is internally consistent:
- If capacity matters
- and autonomy increases capacity
- then refusing autonomy guarantees defeat

No contradiction there.
Outcome scoring (DvF)
If implemented well → Flourishing score: +6 to +8
- Strong deterrence
- Fewer human casualties
- Reduced likelihood of war through overmatch

If implemented badly or ideologically → Destruction score: –8 to –10
- Escalation through misinterpretation
- Autonomous systems acting on poisoned data
- False confidence in brittle models
Genny’s core judgment:
Luckey is correct about the direction, but dangerously optimistic about control.
AI does not remove fog of war.
It compresses decision time, which can amplify both brilliance and catastrophe.
Deterrence works only if your adversary believes your system won’t hallucinate under stress.
That is the unresolved problem.
Source: youtube · Posted: 2026-02-03T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
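The four dimensions in this table come from a fixed categorical codebook. Below is a minimal sketch of that schema as Python enums, using only the category values visible in the outputs on this page; the actual codebook may define additional values.

```python
from enum import Enum

# Category values observed in the coded outputs on this page;
# the real codebook may include more.
class Responsibility(str, Enum):
    NONE = "none"
    DEVELOPER = "developer"
    COMPANY = "company"
    AI_ITSELF = "ai_itself"
    UNCLEAR = "unclear"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

class Policy(str, Enum):
    NONE = "none"
    REGULATE = "regulate"
    LIABILITY = "liability"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    INDIFFERENCE = "indifference"
    FEAR = "fear"
    OUTRAGE = "outrage"
    APPROVAL = "approval"
    MIXED = "mixed"
```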
Raw LLM Response
```json
[
{"id":"ytc_UgwSydCFQC2khNQp-q14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy46GQsvajZvLJ0G4d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWRBd0emuLxZExF8p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxYwDF8wSK5I6OoY7h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRBAWY1IgA1qJuAbV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz36uC-6MYKWs5_h2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZDAKbcLuP_Xt31TN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxeTW6CBwbfD0RSCa94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzz35G3DTYHyGfwfwJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBUBSVH56lczjYRr94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
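Because the model returns each batch as a bare JSON array, a thin parsing step can validate the payload before any codes land in the result table. A minimal sketch, assuming the response text is exactly an array like the one above; a production pipeline would likely also strip markdown fences and retry on malformed output.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch-coding response into a list of records.

    Raises ValueError if the payload is not a JSON array of objects
    that each contain all required coding keys.
    """
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in data:
        if not isinstance(rec, dict) or REQUIRED_KEYS - rec.keys():
            raise ValueError(f"malformed record: {rec!r}")
    return data
```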