Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Well, he’s a professor researching the area of AI safety at an institute that researches and publicises the subject of AI safety. I’m not sure “crackpot” is the best description (unless you have a doctorate I’m unaware of and some secret knowledge on the matter). I’m also moderately curious as to how talking about it “muddles the water”.
Granted, the guy is pushing the book he’s written, and the article about it is clickbaity. However, his opinion is just an argument, that can be evaluated on its own merits.
If you’re curious as to *why* he’s making this argument, then it’s because it’s based on logical conclusions from computer science (experiments on superhuman AIs being scarce, and not a good idea).
| Field | Value |
|---|---|
| Source | reddit |
| Category | AI Governance |
| Posted | 1708160867.0 (2024-02-17 UTC) |
| Score | ♥ 37 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_kqt5ru8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kqu2y8v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtb3wm","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kqt78dn","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtbky6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"})