Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzHTTFxO…`: You better learn how to grow your own food, how to can/preserve it, hunt, fish, …
- `ytc_UgzPBUHmT…`: just a few months ago Nvidia's cofounder told the world that their AI child has …
- `ytc_UgzPxPfP9…`: Don't be stuck with slave and slave owner mentality. We need to utilize AI and c…
- `ytc_Ugwk1N3LK…`: idk if u know but gpt is just algorithm u cant convince an algorithm that its co…
- `rdc_gx7zc8x`: The "pollution per capita" bit people are using to deflect any blame on china is…
- `ytc_UgxI8pY_q…`: Very interesting and scary interview. What about natural resources scarcity due …
- `ytr_UgxVgOAQL…`: That's the thing. The technology has advanced so much since 2021 that you likely…
- `ytc_UgxUhGWMp…`: I wish the AI "injection" would stop. I want to dial back a bit and go back to "…
Comment
I'm endlessly frustrated by science guys constantly dismissing 'philosophers,' as if the scientific method were not itself a philosophical exercise, and as if the very questions you're dismissing weren't core to the ones you're asking. Do we have a single, cohesive definition of what intelligence even is? Can you reliably explain the difference between the intelligence of predictive text and the predictive elements of a primate brain? Can you say, with any degree of certainty, what we would need to see to know whether an AI is truly intelligent?
Here you are, having this conversation about the likelihood of LLMs developing true intelligence while avoiding defining what that is, so what is the use of the conversation? You refuse to engage with the philosophical element, so now there's no other possible outcome besides a shrug.
youtube · AI Moral Status · 2025-10-31T05:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwVA8nMnvbtaBkl1zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsWyUB95SEhWn4JeZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVl_ePAJpVw42M4k54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxT4R5RhN6d7vWn3eB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwoxI7YRZHVy2XR6jl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxc4S8u6T9BmYwz50F4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxhvE96GGj2KI86ul94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwIskV34Cxf46XfY7N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4pkgpv4bNlAGUchF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
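A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the `ALLOWED` code book is inferred only from the labels visible on this page (the real schema may permit more values), and `validate_codings` is a hypothetical helper name.

```python
import json

# Assumed code book, inferred from the labels seen in this page's
# coding results; the real schema may allow additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose labels are in the code book."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must be an object with a comment id.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every dimension must carry a label the code book recognizes.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid
```

Rows with unrecognized labels are dropped rather than repaired, so a malformed model output surfaces as a shorter result list that can be flagged for re-coding.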