Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "1:27:13 Mo Gawdat thinks if we're kind to eachother then AI will learn from us a…" (ytc_UgwSn2kym…)
- "This is all hype to sell microchips and services. AI can't think. All it does is…" (ytc_UgyfiUyW5…)
- "When they do "learn" 5 different things in 2 hours and all of it is in a.i. to "…" (ytc_UgwxaI3Rj…)
- "When i see a self proclaimed Ai artis claiming its their work when using Ai.....…" (ytc_UgwtK8V3L…)
- "4) AI sees human brains for the efficient computers they are. Systematically "up…" (rdc_nxpnpql)
- "If at some point AI content starts to get resampled back into the AI system the …" (ytc_UgwJtIOII…)
- "Thanks for the video.This is the kind of thing I like to here about. Info that I…" (ytc_UgyZ0LAY5…)
- "Man if I were that AI guy and Miyazaki reacted that way to my work I'd shut it a…" (ytc_Ugz_upc3G…)
Comment
It seems to be an unpopular opinion judging by the top few comments but we should shift the focus on stopping killer A.I. production to creating numerous fail safes. Once we discover something, we will always pursue it. Either with the public's approval, or in secret. Laws put in place to discourage killing of civilians and terms for surrender without killing innocent bystanders may be more productive than trying to stop the inevitable. This way we may buy ourselves hope for less destructive wars and more time to learn how to "pull the plug" on these killer A.I.
Source: youtube, 2020-02-03T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzL0pUMLLwL1ct1UcV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzPgKgzmi6ht-zyIm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwVoJoWz1X0ALjyI3N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyBb7bGF9NefEGD_K14AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz-19ekQVhFElTInsN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyMARCQefsn0MDvXTp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzR5VxVqXBDei8wyyx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzh3M4DPh9TlNuB2Ox4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxUvujdDSASDkqD3lh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyM85k0NGfHG-MmcSd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
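The model's raw response is a JSON array of coded records, one per comment, with the fields shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the lookup-by-comment-ID step, assuming only that schema; the `index_by_id` helper and the inline sample response are illustrative, not the tool's actual API:

```python
import json

# Illustrative raw LLM response (one record, same field names as above).
raw_response = """
[
  {"id": "ytc_UgzPgKgzmi6ht-zyIm54AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the raw LLM response and index coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgzPgKgzmi6ht-zyIm54AaABAg"]["policy"])  # → regulate
```

Indexing by `id` makes the "Look up by comment ID" view a single dictionary access rather than a scan over every coded record.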