Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_mbwixs5`: "What?! Pausing AI to take the time to come up with safety measures is akin to mu…"
- `ytc_UgyWkKJzJ…`: "The teleological ambitions/destiny of mankind has lead us to develop AI. The fac…"
- `ytc_Ugw8acS3P…`: "AI will not be available to everyone and if you want it, you will have to pay fo…"
- `ytc_Ugxkl7v4U…`: "If you want to sell them atom bombs first you’ve got to sell them Fear.…"
- `ytc_UgxrtZZDb…`: "What we need is to stop all this artificial intelligence , teach CSI means comm…"
- `ytc_UgztD5Mcp…`: "Elon musk: AI is more dangerous than nukes / Also him: let’s make are car fully ai…"
- `ytc_UgyVMtYQz…`: "The pandemic proved that remote learning and technology could never replace teac…"
- `rdc_degertf`: "Unfortunately, this isn't surprising to me, nor should it be to anyone. People a…"
Comment

"The AI fighter thing reminds me of the storyline behind Ace Combat 7 Skies Unknown."
(youtube, AI Governance, 2023-09-06T23:5…)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxtMepRKz5g7clodph4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykLMqygnG7Ez97Vy14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugze7NKfLvdlf9zCvvh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxDsjeNloBTpJ0MkLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwlt-GrZshxb0b28J94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxbnqbNR-zSliNfYI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwpAdpHHyT9l07lgth4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzx3slmdJ1kEAGKYSx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSaFcANeQBSoe_DvF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyvs6mrHSETYVrv9xV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
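A response in this format can be checked programmatically before the codes are stored. The following is a minimal sketch, not part of the tool itself: the expected key set is taken from the JSON above, and the two inline records are copied from it for illustration.

```python
import json

# Every coded record is expected to carry exactly these five fields
# (key set inferred from the raw response shown above).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgxtMepRKz5g7clodph4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyvs6mrHSETYVrv9xV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

def validate(records):
    """Keep only records whose keys match the expected schema exactly."""
    return [r for r in records if set(r) == EXPECTED_KEYS]

records = validate(json.loads(raw))
print(len(records))  # 2: both sample records pass the key check
```

A record missing a dimension, or carrying an extra one, would simply be filtered out; a stricter validator could also check each value against the codebook's allowed categories.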