Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I believe AI is growing so fast because of the rapid increase in m8nimum wage on…" (ytc_UgxHpIDDj…)
- "You might ride in a driverless car 😂 but *I'm* not gonna! FORGET that sh*t !!…" (ytc_Ugyj-NaLu…)
- "Just have a law saying 80% of profits made with AI go to the UBI fund. They woul…" (rdc_kig9tu5)
- "If you cant distinguish between talking to an AI and talking to a real person an…" (ytr_Ugw2CAOr_…)
- "Lol too little support from the government. Japan is a shitty country on many re…" (rdc_gspb42i)
- "I don't really understand why everyone is so happy. Doesn't anyone realize what …" (ytc_UgyeG77yk…)
- "isnt crazy that the billion dollar companies could just infringe on everyones wo…" (ytc_UgzImDeLw…)
- "Wow😮 ai made me eligible to be among first ten people to click on this.…" (ytc_UgyAYBfoZ…)
Comment
1. Is there a fundamental difference between a meat computer and a silicon-based computer?
2. Is there some fundamental limit on how intelligent an intelligence can be, and if so, are we at that limit?
If your answers to 1 and 2 are no, then superintelligence is possible. If your answer to 1 is yes, and to 2 it's no, then we can just build a meat computer instead - and superintelligence is possible.
There's really no good reason to think that superintelligence isn't possible or even likely. There are clearly some humans who are far more intelligent than others in at least certain areas... why couldn't a machine do that? That indicates it's not necessarily a difference in kind ("normal" vs. "super" intelligence). If the thought is that our current methods won't get us there, fine, but that doesn't preclude us changing our methods; or AI changing its own methods.
Without a fundamental limit on intelligence, if we continue to work at it without going extinct first, I think getting there at some point is highly likely. And whether it's in 2 years or 200 years the same dangers remain.
Source: youtube · AI Moral Status · 2025-11-01T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzr2YCzK_q4lkbIxJB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy3v0D148fR819XJk54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw0bErxdWkj30RXj6N4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgO7pSfi5-FgMGhVB4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGHYD_S9emAirkbLd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxrZKpezA8USTxo5JJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyjjDyJr2Wnl_iV1Z94AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwP7erQ0nFSth-1c9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzEgOovjolxEbt9hSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzsyqQA4rZGQtp73eh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
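A response like the one above can be loaded and checked before the codes are stored. The sketch below is a minimal, hypothetical helper (not part of this tool): it parses the raw JSON array and validates each record against the four coding dimensions shown in the result table. The allowed values are inferred from the visible data only and are not an exhaustive codebook.

```python
import json

# Allowed codes per dimension, inferred from the sample output above.
# This is an assumption, not the project's official codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"none", "indifference", "approval", "outrage", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    coded = []
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
        coded.append(rec)
    return coded

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"none","policy":"none","emotion":"approval"}]')
print(parse_coding_response(raw)[0]["emotion"])  # approval
```

Rejecting unknown values early (rather than silently storing them) makes it easy to spot when the model drifts outside the coding scheme.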