Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- AI mimics what it sees. If AI decides to exterminate all humans, that says more … (ytc_UgxAlW58y…)
- Of course Artificial "Intelligence"... Intelligence in humans can get scary the … (ytc_Ugy5vJkKJ…)
- I like to remind myself every time I see a shitty AI generated piece, or pro-AI … (ytc_UgyWZzG-u…)
- @ChrisShylor I wrote ui framework, sometimes using AI. It fails to understand ba… (ytr_Ugwm6lfIZ…)
- AI Bros are the most pretentious people alive. Kicking and screaming every day b… (ytr_UgxyaQTN4…)
- Saying 'when is AI going to be conscious' is like saying 'when will we know when… (ytc_UgwDVpWWB…)
- I have a better solution for this -> agentic boards of investors. have AI autono… (ytc_Ugw8T7E-a…)
- Op I don't know if this helps. But we are decades if not a century from what we … (rdc_mrue6w5)
Comment
It took google search about 2 seconds to retrieve my search results. If AI becomes sentient, it will take 2 seconds for it to 3D print armed drones and launch the icbms. 💥
youtube · AI Governance · 2023-05-17T02:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
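The four dimensions in the table can be sketched as a typed record. This is a minimal illustration, not the tool's actual data model; the field names follow the table, and the `CodingResult` class name is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment along the four dimensions shown in the table.

    Values are kept as free-form strings here; a fuller implementation
    might constrain each dimension to an Enum of allowed labels.
    """
    responsibility: str  # e.g. "unclear", "developer", "government"
    reasoning: str       # e.g. "consequentialist", "deontological"
    policy: str          # e.g. "unclear", "regulate", "liability"
    emotion: str         # e.g. "fear", "outrage", "mixed"

# The coding shown in the table above:
result = CodingResult(responsibility="unclear",
                      reasoning="consequentialist",
                      policy="unclear",
                      emotion="fear")
```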
Raw LLM Response
[
{"id":"ytc_UgxAn3VqUXZos_VDjTh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgysrOVEkG-U3imPKMd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPnWLdytQE0Uh71TF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwBzM3I-FgHYzgddTd4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxJEAbjpSBuwKpepMx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzfntQxqQsQjRObi8J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwCYZ26KQI-YFGkX_V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw8QGiSXSxeAmOv-2R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzPngz8kwKyEnqFvbN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugydc9rwuRG3eaZvvSt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
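The lookup-by-comment-ID view above boils down to parsing a batch response like this one and filtering on `id`. A minimal sketch, assuming the raw response is a JSON array of per-comment codes as shown; `lookup_code` is a hypothetical helper, not the tool's actual API.

```python
import json

# A trimmed copy of the raw batch response shown above
# (two records kept for illustration).
raw_response = """
[
 {"id":"ytc_UgxAn3VqUXZos_VDjTh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgysrOVEkG-U3imPKMd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
"""

def lookup_code(raw: str, comment_id: str):
    """Parse a raw LLM batch response and return the code dict
    for one comment ID, or None if that ID was not coded."""
    codes = json.loads(raw)
    return next((c for c in codes if c["id"] == comment_id), None)

code = lookup_code(raw_response, "ytc_UgysrOVEkG-U3imPKMd4AaABAg")
print(code["emotion"])  # fear
```

In practice a real LLM response may carry markdown fences or trailing commentary around the JSON, so a production parser would want to strip those before calling `json.loads`.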