Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Great lyrics compared to bodak yellow and other trash out there!!!! So are we …" (ytc_Ugx5hGDt9…)
- "We need to wake the fuck up to the propaganda potential of foundational models. …" (rdc_ky8qcnd)
- "They thought scores is everything. But in the recent years the most important th…" (ytc_Ugzq_UVNS…)
- "If y’all aren’t studying for AI based jobs, you won’t have a job in the next 3-5…" (ytc_Ugyn5Z0uE…)
- "ChatGPT generates text and it will generate what you instruct it to. No surprise…" (ytc_UgxzmyvMR…)
- "Jealousy is outrageous.. the monopoly board is out of whack!! Fact! The clock i…" (ytc_UgzEY4ZnS…)
- "Whether its real pr not, presenting it is bad. AI abd robots should be disappear…" (ytc_Ugy2jJxR6…)
- "It is like shuting your own leg just in case. EU will gimp its AI development wh…" (ytc_Ugyl7m5O9…)
Comment
I think it's reasonable to say there's a difference in kind between the mind and the current architecture of LLMs - But I do think there exists an arrangement of linear algebra that differs from the mind only in scale.
How long til we find it? No one can say. Will we find it before everyone's current investments in datacenters are worthless? I'm guessing probably not.
youtube · AI Moral Status · 2025-10-30T21:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
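Each coded comment is scored on four dimensions. As a minimal sketch, the value sets below are inferred only from the responses shown on this page (the full codebook may define more values), and the `validate` helper is a hypothetical name, not part of the tool:

```python
# Value sets inferred from the raw responses shown here; the actual
# codebook may allow additional values for any dimension.
SCHEMA = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def validate(record: dict) -> list:
    """Return the dimensions whose value falls outside the known sets."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The coding result above passes: every dimension is a known value.
print(validate({"responsibility": "none", "reasoning": "consequentialist",
                "policy": "industry_self", "emotion": "indifference"}))
```

A record with an unexpected value (say, a misspelled emotion) would come back as a non-empty list, which is a cheap sanity check before accepting a batch of codes.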
Raw LLM Response
```json
[
{"id":"ytc_UgxnwHSSlGCuivTFszJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdLssxoriB_tmqhQB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxuDnfAUuhhHdwnjcN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzrQ8DTBT42E71OiXh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyZ6jC9iPewbul9Dw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxlOMjrzxfH4J9Rfi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwx8tuo7uUno_HpBlx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwAqXRJeAyO5U0o07Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzRMg66zYDt84P8JlJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzMsKMJXSf5w7PJ60R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
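The raw response is a JSON array with one object per comment ID, which makes the "look up by comment ID" view straightforward to reproduce. A minimal sketch, using a two-record subset of the response above (the variable names are illustrative, not the tool's internals):

```python
import json

# Subset of the raw LLM response shown above: one object per coded comment.
raw_response = '''
[
  {"id": "ytc_UgzrQ8DTBT42E71OiXh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"},
  {"id": "ytc_UgzRMg66zYDt84P8JlJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
'''

codes = json.loads(raw_response)

# Index the records by comment ID so any coded comment can be pulled up
# directly, mirroring the lookup box at the top of this view.
by_id = {row["id"]: row for row in codes}

record = by_id["ytc_UgzrQ8DTBT42E71OiXh4AaABAg"]
print(record["policy"])  # industry_self
```

Because the model is instructed to echo the input `id` on every object, joining the codes back onto the original comments is a plain dictionary lookup rather than a positional match.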