Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID
Random samples — click to inspect
I WISH I was AI the way I am continually getting blindsided by social BS. 😂…
ytc_Ugx1eXWF2…
If a bad actor is using a superintelligent AI, the only way to counter it will b…
ytc_Ugxpvh9Sk…
Clearly the CEO or board that approved this ad needs to be replaced by Ai as wel…
ytr_Ugxuvx7Tb…
the rate for young men is already high, AI isn't the only object to blame here…
ytc_UgwPrr_mz…
why should cats exist? they're not useful, yet humans take care of them.
why sho…
ytr_Ugz7mmBmU…
I have already interacted with an Ai CSR Agent. It was with a GIGANTIC bank whi…
ytc_UgwdqDA3y…
You need to remember that Cuba always been wealthier then [other Caribbean natio…
rdc_f9fkdcw
The google employee that was fired for warning us all in 2024 said this same th…
ytc_Ugwypu5Mx…
Comment
I wouldn't put too much stock into anything coming out of MIRI or the LessWrong sphere writ large. Soares and Yudkowsky don't have backgrounds in machine learning or cognitive science; Yudkowsky is an auto-didact and Soares did comp-sci and econ during undergrad; those are the qualifications they're bringing to the table here. The authors are just game theorist bloggers and amateur logisticians who are attempting to apply the pseudo-philosophical framework of Rationalism (which has almost nothing to do with the actual enlightenment-era philosophy) to a theoretical emergent machine intelligence possessed of maximal rationality that simply does not and, in all likelihood (per the Chinese Room Argument and the broader shift away from the computational theory of the mind), will never exist. Much like that ludicrous AI 2027 paper, it's self-serving AI hysteria that completely and utterly misrepresents the moment in favor of drumming up an apocalyptic fantasy that directly advances the interest of the authors and their largely bunk "research institute."
youtube
AI Moral Status
2025-10-31T08:0…
♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyQ6cX3vzGK0IYWCip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzsZXVqHuryCnOFNR54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyeD4KB3mZTSgAfyTt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdrjBu_20OJFahPuV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugzgpt1tdS4toFzLxIZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz957vNq8JtwrGAZ3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
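The lookup-by-ID flow shown above (a raw LLM response as a JSON array of per-comment codings, retrievable by comment ID) can be sketched as follows. This is a minimal sketch, not the tool's actual implementation: the function name `index_codings` and the allowed-value sets are assumptions inferred from the values visible in the table and JSON above; the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension -- assumed from the examples shown
# above; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    into an id -> coding dict, skipping malformed entries."""
    by_id = {}
    for entry in json.loads(raw_response):
        comment_id = entry.get("id")
        if not comment_id:
            continue  # entry has no usable ID
        coding = {dim: entry.get(dim) for dim in ALLOWED}
        # Keep only entries whose values all fall inside the codebook.
        if all(coding[dim] in ALLOWED[dim] for dim in ALLOWED):
            by_id[comment_id] = coding
    return by_id

# Usage: one entry taken verbatim from the raw response above.
raw = ('[{"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codings = index_codings(raw)
print(codings["ytc_UgxqeZPWCijSy8vLmfV4AaABAg"]["emotion"])  # indifference
```

Indexing by ID makes the "look up by comment ID" feature a constant-time dictionary access, and the value check quietly drops any coding where the model drifted outside the codebook.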