Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "@user-og6hl6lv7pthey said Epstein island was a conspiracy theory, if people had …" (ytr_UgyRqxaWP…)
- "Everybody talks about Open AI and Sam Altman and safety concerns, could also be …" (ytc_UgyGkoX9U…)
- "A lot of what these two ass clowns produce ( Yudkowsky and Soares) honestly read…" (ytc_UgyfUu7tZ…)
- "I can actually understand how facial recognition could make false matches with b…" (ytc_UgxFPurcN…)
- "@The63adrian Sure, yeah that's kind of old news atp though. Ig I'm more asking l…" (ytr_Ugw2XvO6r…)
- "AI is not able to hold a conversation at all. It is AI assistants that hold the …" (ytc_UgwpLoDRT…)
- "When ChatGPT detects harmful conversations deep into a rabbit hole it just needs…" (ytc_UgxMh4w1N…)
- "Yeah, its a weak argument. We do use power tools, but there are plenty of power …" (ytr_UgwlAb7ZZ…)
Comment
I think I agree with everything except one: you can define 'intelligence' in many ways, and maybe for some AI is a misnomer, at least in the current era. But I think some core property of intelligence is that it is a form of problem-solving, decision making, and learning. Artificial intelligence is then to me nothing more than the quest to automate intelligence, i.e. to automate problem-solving and decision making. When you say 'is a actually a problem of all automated systems", I agree, but in this case the system is one that is built to automate high-level thinking, decision making, understanding, generalization, scientific research, coding, writing, ..., and those are the very thing on which all of human industrial productivity are built upon. When the thing that is automated is industrialization, non-intended behaviors become bigger than what they would be in small localized automated systems built to be good at one thing. If AI is used, anywhere, then it automates, and it automates, as you said, in a way that is hard to understand. I would also say that "superintelligence" does not need to have anything to do with sentience, or the notion of intent; I don't think the issue is anthropomorphization at all, except in that we can be fooled to think we can trust it to think in the same terms as we do.
youtube
AI Moral Status
2025-10-30T20:4…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzczosQYWlhu4gNJCl4AaABAg.AOv-s3FOmrWAOvlatiK76M","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugx3nSuDFDjpcBaDBdF4AaABAg.AOv-oK_sjhJAOv9g7gfeT1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOv80RLGal7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOv8HYMTXbX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvA2T7sLay","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvAhqaz2Go","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvDOnLnfqS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugy-H-lkhzRZ5AlKyL94AaABAg.AOv-O0chSbaAOwasIs7OId","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwmIejXdDmc3nz1Zy54AaABAg.AOv-G3_fRc9AOwR7xdoXeU","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwYSjrR-3YQGIB4WPl4AaABAg.AOv-FgYG3tmAOv3XHim0r5","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
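A raw response in this shape can be turned into validated coding records before it feeds the dimension tables above. The sketch below is a minimal, hypothetical parser: the four dimension names come from the Coding Result table, the allowed values are only the codes visible in this sample (the real codebook may be larger), and the stand-in IDs and response text are invented for illustration.

```python
import json

# Stand-in for a raw LLM response like the one shown above (hypothetical IDs/values).
raw_response = """[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]"""

# Allowed values per dimension, inferred from the codes seen in this sample;
# the actual codebook may define more categories.
DIMENSIONS = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval"},
}

def parse_codings(text):
    """Parse a raw JSON response into coding records, rejecting unknown codes."""
    records = json.loads(text)
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
    return records

codings = parse_codings(raw_response)
print(len(codings))  # → 2
```

Validating against a closed code set at parse time catches model outputs that drift outside the codebook instead of silently storing them.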