Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Now AI is 0.3% and when tha ending days will come it will go to 100% you know wh… (ytc_UgwFtRpL1…)
- "take over by skynet" cliche / If said can remoted by car company or can be hack … (ytr_UgwCb16X4…)
- I work for a company that's essentially outsourced admin (except we don't suck a… (rdc_nlv370g)
- The difference is that models that create better "AI art" , as midjourney, are t… (ytc_UgwK9oDgb…)
- Ok. For some reason this video makes me want ChatGPT to come out with an English… (ytc_Ugy9rJ-Bi…)
- The question is how often are the AI wrong, not do they profile things. Since pr… (ytc_Ugxb9FMSK…)
- Unregulated AI is a clear and present danger! 🤬 Greed Rules, they do not care!!!… (ytc_UgxrZkNkp…)
- If we are living in a simulation then what is at stake related to AI?… (ytc_Ugztbu0zN…)
Comment
If it’s actually “predicting” which word comes next, it has to be programmed to search for the answer in a specific way (ie which word comes next the most often across the internet). If that’s how it works, we should understand how it thinks, and that wouldn’t make it any smarter or knowledgeable. If it’s an algorithm, then they’d know how it thinks. You could argue you don’t know WHAT it thinks because it’s such a long algorithm full of infinite inputs. But how can you possibly not know HOW it thinks? Clearly, it’s not a man-made intelligence. It’s something else all together.
youtube · AI Moral Status · 2025-12-13T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgzkwheJMmDwLhuJIpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwm7BCarjgEsuogN-d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyLnHTzsde2_R1O78F4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyAuTgkIE5_EI40t_p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMmAxKmi5eCT09YpV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwO6Ow4pDaH5gOOv0d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwLH9vclTCIExOiHCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzhAs62KNIMIA3wDTN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyh926fxhvk8_KxFqx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmK_MPix2eECfbd1t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]