Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I've been lucky enough to be able to weather this storm financially this time ar…" (rdc_gkr0dl1)
- "As someone who spent a considerable time trying to learn about history form all …" (ytr_UgymzeLFb…)
- "ai can create a point of view. for now we need to prompt it to do so, but once i…" (ytc_UgwYHBqT7…)
- "@karleerenee9486 maybe you're right. But Sophia was talking about her stolen wa…" (ytr_UgyYkeCGD…)
- "It would be useful for some additional context information. I haven't driven an …" (ytc_UgxHEAKhA…)
- "THIS IS SICK AND DEMONIC, TO ITS CORE!!! MANKIND IS GETTING SICKER AND SICKER, …" (ytc_UgxhBPSHH…)
- "AI does not have organic desire or motivation; only binary bias. Algo contributo…" (ytc_UgxfynXqu…)
- "HEY !! It's that womans friend who had her face torn off by the chimp ! Wow...…" (ytc_UgxWZxw_k…)
Comment
Sorry for not agreeing with most comments and Howard's general beliefs in the state of LLMs.
But from my knowledge of how these LLMs work (and I have programmed them from scratch, albeit small ones),
they are merely pattern-matching by doing matrix calculations.
They have no idea what they are doing, no abstract model of the world, and by no means are they aware.
For a simple example, look at auto-generated subtitles and see how often they display words that are made up or wrong in a way no human who understands the context would ever do.
youtube · AI Moral Status · 2026-03-06T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyrueazSjtzj24XgAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMJ6OBlMoi8MxYNrp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUTmxH6NJuLvqg18p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCFCH_d3jsrYWaUg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxYECh03p6M-WAIyul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxGc3a0Fh99vb9WdoR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCTdlFgzTTfRmyPC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQUxk9iDerVpXbEuF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyaBhmbdFIoBm8YkzF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzjgRSk9s8pd4LpXut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
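
A response in this shape is a JSON array of per-comment coding objects, one per comment ID, with the four coded dimensions as fields. As a minimal sketch (assuming the response is valid JSON; the tally and ID-lookup logic here are illustrative, not the tool's actual implementation), it can be parsed and indexed like this:

```python
import json
from collections import Counter

# Two entries copied from the raw response above; a real payload would
# contain the full array.
raw = '''[
 {"id":"ytc_UgyrueazSjtzj24XgAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzUTmxH6NJuLvqg18p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

codings = json.loads(raw)

# Tally one coded dimension across all comments.
emotions = Counter(c["emotion"] for c in codings)
print(emotions)  # Counter({'approval': 1, 'mixed': 1})

# Index by comment ID, supporting the "look up by comment ID" workflow.
by_id = {c["id"]: c for c in codings}
print(by_id["ytc_UgzUTmxH6NJuLvqg18p4AaABAg"]["responsibility"])  # developer
```

The same lookup dictionary also makes it easy to spot which comment IDs in a batch came back without a coding entry.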