Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click one to inspect; a programmatic lookup sketch follows the list)
- "What AI still lacks today: the feeling of passing time. How could one test wheth…" (ytc_UgyGD6r5z…)
- "Who the fcuck cares. My home country is almost half first and second generation …" (ytc_Ugx80fqoe…)
- "The reason this you know its fake is youtube wont allow automatic gunfire to be …" (ytc_Ugz66qEYW…)
- "You can never trust billionaires. No billionaire has a moral compass you cannot …" (ytc_UgzahGzp9…)
- "Leaving the decision of making a lethal choice at the hands of an AI is an absol…" (ytc_UgwbsB19L…)
- "As soon as Tesla called it Auto Pilot they implied completely autonomous self-d…" (ytc_UgxagnHKB…)
- "I believe my video on the Godfather of AI leaving Google pretty much summarises …" (ytc_UgxNX-5Cv…)
- "Don't worry, Microsoft will screw it all up for A.I.. (speaking from a lifetime …" (ytc_UgwIW6dkO…)
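Looking a coded comment up programmatically mirrors what this page does: load the stored raw responses and index the records by their "id" field. A minimal sketch, assuming the responses are saved as JSON arrays like the one shown under Raw LLM Response below; the file name and function name are illustrative, not the tool's actual API.

```python
import json

def index_coded_comments(path: str) -> dict:
    """Build an id -> record index from one raw LLM response file.

    Assumes the file holds a JSON array of objects, each with an "id"
    field plus the four coded dimensions, as in the array further down.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

# Hypothetical usage; the file name is illustrative. The sample IDs above
# are truncated for display, so a lookup needs the full comment ID.
index = index_coded_comments("raw_llm_responses.json")
record = index.get("ytc_UgyoScBtzkbFRIA7FKl4AaABAg")
if record:
    print(record["emotion"])  # "indifference" for this example record
```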
Comment
> What's the matter with people? Nuclear bombs are always more dangerous than AI. It can kill one city in an instance. However, with AI, you're way more picky so you can (quite) accurately kill the one you target. At least you don't blow up the whole city just because one terrorist is in it.

Source: youtube, posted 2018-04-03T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
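The record behind this table can be modeled as a small typed structure. A sketch only, assuming exactly the fields shown on this page; the class name is mine, not the tool's.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    id: str             # full comment ID, e.g. "ytc_UgyoScBtzkbFRIA7FKl4AaABAg"
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: Optional[str] = None  # tool-side timestamp; absent from the raw LLM output
```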
Raw LLM Response
[
{"id":"ytc_UgyoScBtzkbFRIA7FKl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKC8LEJBRf3oH2Nel4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyxRvmFhk_e3Chpo694AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwNYxbNGFM_Mz_5VWh4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzWCRDlVST0f5ochoR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxeTCeV3GKbdLQ7RyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaFE0xBsRGZAVQKk54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwkYs-AHI2FZoYM8rR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzlsUPyg4KA94bumXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgydTYTeKneTnicpAM14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
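A raw response like the array above can be sanity-checked before its values reach the Coding Result table. A hedged sketch: the allowed value sets below are only the ones visible on this page, and the actual codebook may define more.

```python
import json

# Value sets observed on this page only; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "industry_self", "regulate"},
    "emotion": {"approval", "outrage", "mixed", "resignation", "indifference"},
}

def check_response(raw: str) -> list:
    """Return (id, dimension, value) triples for any out-of-set values."""
    problems = []
    for record in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                problems.append((record.get("id", "?"), dim, str(value)))
    return problems
```

Run over the ten records above, this returns an empty list, since every value falls within the observed sets; on a fresh response it would surface any label the model invented outside them.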