Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I hate it when they do that. Until I retired I was working on several of them ro…
ytc_UgxSn4d5A…
sounds as if, at some point, AI will tell us what governments want us to know, o…
ytc_UgzgfDVGv…
The worst part is that the AI image doesn't look bad, and that's probably a thin…
ytc_UgxT4TtiO…
We should create a pact and ask all the nations to sign to not use AI in weapons…
ytc_UgxY6zjR_…
Nope.
Sorry.
WE ARE THE TECHNOLOGY!!!!!!!!
Most people just haven’t remembered …
ytc_UgzI-_8gI…
This makes me want to develop an AI that only draws on poisoned art. Ah, the fre…
ytc_UgzhNvwTe…
useless tech, they aint prisoners, they aint robot!
if you want them to pay atte…
ytc_UgyBoV8Gt…
Tax benefits based on how many humans a company employs, and heavy taxation on b…
ytc_Ugy6dDxCI…
Comment
She is extraordinarily ignorant and dismissive about the existential risk from AI. At the rate that artificial intelligence is improving, it is highly possible that we are going to create a superintelligent general AI in the near future. And we have absolutely no clue what such an intelligent entity would do. We would have created something vastly more intelligent - and therefore more powerful - than us.
Would it care about us? Would it have its own ends? Would we be in the way? Would it value us, or would it see us as a threat?
Such an intelligent entity would *absolutely* have the power to annihilate or just disempower us; the only question is whether it would choose to do so. And we have no way of knowing the answer to that question until it is perhaps too late.
youtube
AI Responsibility
2023-12-11T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxlrxbtViBQci8GkaZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzzsC3ZZblbt8Hk1vl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwEu7IGlGfkbGNwgmN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxP98tigJdDgk6n-0F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugywwxxa90S517IbSH54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxLIJxW-pYOEt7X1fF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxkHN_0MfBq6a-cMBZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxUeLD2H1qwIbhcaIl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzkl8jzmxokx3KjwOl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoldkqMJX_-cXTTvB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}
]
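The raw response above is a JSON array of per-comment records, one per coded comment, each keyed by a `ytc_`-prefixed comment ID with the four coding dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and validated, assuming the allowed category values are those visible in this sample (the real codebook may define more; the function name and `ALLOWED` vocabulary are illustrative, not part of the tool):

```python
import json

# Hypothetical allowed values per dimension, inferred from this page's samples.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Rows missing an ID or containing an out-of-vocabulary value are
    skipped, so one malformed row does not invalidate the whole batch.
    """
    records = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[cid] = {dim: row[dim] for dim in ALLOWED}
    return records

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_example"]["emotion"])  # fear
```

Indexing by comment ID mirrors the page's "Look up by comment ID" feature: once the batch is keyed this way, retrieving the coding for any comment is a single dictionary lookup.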