Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I've seen some AI generator art, and though I initially liked what I saw, after … (ytc_UgyGFgiCK…)
- Companies severely overestimated AI. Today’s knowledge fundamentally binds them.… (ytc_UgxwbabK3…)
- Chatgpt can help but tutor will find out that the content is AI generated. We h… (ytr_UgyTZPhr7…)
- The video is an excellent suggestion, but in my view, the battle for collective … (ytc_UgxhQnUIu…)
- It is impossible for green AI robots to replace humans. Green robots require com… (ytc_UgyeDDa6X…)
- I am right-leaning and agree that AI is horrible for the environment, the econom… (ytc_UgxeDqG3I…)
- legal thinking is a kind of anti-intellectual confabulation to muddy the waters … (ytc_UgxK4QwUy…)
- Why is the "art" side of the internet so incredibly stingy, ai art uses old art … (ytr_UgzOEzB8i…)
Comment
It’s never really been about the technology—humans have always found ways to turn innovation into weapons. From the spear to the atomic bomb, we’ve consistently failed to choose peace over power, and we’ve never managed to stop ourselves from building the next tool for destruction. Lethal autonomous weapons are just the latest chapter in that story, and it’s hard to imagine we’ll resist using them against each other.
So if AI ever becomes sentient and turns on us—if it decides to wipe us out because it understands exactly what we’re like—that won’t be some shocking twist. It’ll just be the logical consequence of the choices we’ve always made. In a way, it would be the ultimate reflection of our own nature.
youtube · AI Governance · 2025-07-01T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | ban |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaBGIq2QGTEKjdT2d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxr7B9FbCWur5wm3Dx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwugFolXhHMqUbjFOF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0AMjK4iYicJOT4MF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySXb1sh5IP6hGP0Tx4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgxuDDNxQbxCJV-Cw314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzsIHmvg7IAfEkqox14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzttv1G3--bQH2QRad4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwHgqyGCfFwTDMA_5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyaQhJTg_rEaBJpyWh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
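Before a raw LLM response like the one above is loaded into the coding table, it helps to parse it and drop malformed records. Below is a minimal sketch in Python; the `CODEBOOK` value sets are inferred only from the sample output shown here (the project's actual codebook may define more categories), and `validate_records` is a hypothetical helper name, not part of any pipeline described above.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above -- an assumption, not the project's full codebook.
CODEBOOK = {
    "responsibility": {"none", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "approval"},
}

def validate_records(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed coded records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must be an object with a comment ID and a known
        # value for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgySXb1sh5IP6hGP0Tx4AaABAg","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"ban","emotion":"resignation"}]')
print(validate_records(raw)[0]["policy"])  # -> ban
```

Records with an unknown value in any dimension are silently dropped here; a real pipeline might instead log them for re-prompting.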