Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
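If you would rather query the exported data directly than use the page widget, the lookup amounts to loading the coded records and indexing them by comment ID. This is a minimal sketch only: the file name `coded_comments.json` and its layout (a JSON array of records shaped like the raw response shown further down) are assumptions, not the actual backing store of this page.

```python
import json
from typing import Optional

# Hypothetical export path; the real store behind this page is not shown here.
CODED_RESULTS_PATH = "coded_comments.json"

def lookup_by_id(comment_id: str, path: str = CODED_RESULTS_PATH) -> Optional[dict]:
    """Return the coded record for one comment ID, or None if it is not present."""
    with open(path, encoding="utf-8") as f:
        # Assumed layout: a JSON array of records like
        # {"id": ..., "responsibility": ..., "reasoning": ..., "policy": ..., "emotion": ...}
        records = json.load(f)
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

print(lookup_by_id("ytc_UgzhACq6DEYufyNfLKt4AaABAg"))
```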
Random samples
- You realize they use machine learning to move these robots, and the neural netwo… (ID: ytr_UgxZZS0fA…)
- Maybe in 100 years from now. I think the so called "Godfather of AI" is an idiot… (ID: ytc_Ugxenq-8R…)
- AI is just a database which we put into it and it increase its knowledge about t… (ID: ytc_UgwO3Ov7Z…)
- One of the things that bugs me most about the ai conversation is that it’s not j… (ID: ytc_UgzjX_Ejj…)
- Yup. OpenAI wants their products to be used for this, but they aren't quite rea… (ID: rdc_o6wn83f)
- 1:53 Wait, so if there are two different types of AI art generation, why do you … (ID: ytc_UgwhWamWF…)
- Thanks to AI,our company that is all for sustainability has shifted their mind s… (ID: ytc_UgyqF3rYP…)
- >Decrease the numbers? Sure, seems reasonable. Well, lets look at every oth… (ID: rdc_kz09068)
Comment
I think of self diagnostics. If we wish to control what is going on within an AI model, with trillions of parameters, then self diagnostics must be built into the structure of the neural network. This would take exponentially more compute than is currently being used. The AI ITSELF has no knowledge of its own inner machinations, so how could we?
The growth imperative and arms race between AI companies will never result in building more carefully built systems.
Good computer software code is FULL of unit tests, but even those are severely limited because they only test for known metrics, and are useless to solve emergent problems.
The only "safe" method would be to compartmentalize AI into suites of collaborative narrow AIs managed by a simple, controllable core. The current method of simply creating larger and larger neural networks with more data and compute, resulting in amorphous, mysterious black box "brains" is the opposite of control and by definition makes transparency, a requirement for control, impossible.
Source: youtube · Posted: 2024-07-07T19:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
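For reference, one way to carry a single coding result around in code is a small record type mirroring the table above. The class name and field types below are illustrative assumptions rather than the project's actual schema, and the example values in the comments are only those visible in the raw response shown next; the full code books may contain more categories.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment: the four dimensions from the table plus the coding timestamp."""
    comment_id: str
    responsibility: str  # observed values include "ai_itself", "company", "developer", "user", "distributed", "none", "unclear"
    reasoning: str       # observed values include "consequentialist", "mixed", "unclear"
    policy: str          # observed values include "regulate", "liability", "none", "unclear"
    emotion: str         # observed values include "outrage", "fear", "mixed", "unclear"
    coded_at: str        # ISO 8601 timestamp string

example = CodingResult(
    comment_id="ytc_UgzhACq6DEYufyNfLKt4AaABAg",
    responsibility="ai_itself",
    reasoning="unclear",
    policy="unclear",
    emotion="unclear",
    coded_at="2026-04-27T06:26:44.938723",
)
```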
Raw LLM Response
[
{"id":"ytc_UgzhACq6DEYufyNfLKt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugwkrip8JRd6eOssIt14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZq9eumJCEcYP6Zet4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxxQSFa7BM9aCz7Ach4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwngLQHf2pAOx-l8l54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwyGaCd7URee5pIY8N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxpjHcWlIN2wvHgYaF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxccOYzbNx7_03SWjh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy5fICEj6a2Eg9J-3d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy3m41-I2NaoaECF_V4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
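The raw response is a plain JSON array, so recovering one comment's codes from it is a parse-and-index step. A minimal sketch follows, assuming the model output has already been captured as a string; the `raw_response` stand-in below holds only the first record from the array above, and the function names are illustrative.

```python
import json

# Stand-in for the captured model output shown above (first record only).
raw_response = """[
  {"id": "ytc_UgzhACq6DEYufyNfLKt4AaABAg",
   "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "unclear"}
]"""

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(text: str) -> dict[str, dict]:
    """Parse one batched LLM response and index its records by comment ID.

    Raises ValueError if the output is not a JSON array or a record is missing
    one of the expected keys, so malformed batches fail loudly instead of
    silently dropping comments.
    """
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    indexed = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing keys: {missing}")
        indexed[rec["id"]] = rec
    return indexed

codes = parse_batch(raw_response)
print(codes["ytc_UgzhACq6DEYufyNfLKt4AaABAg"]["responsibility"])  # -> "ai_itself"
```

Indexing by ID this way is what lets a display like the Coding Result table above be rebuilt for any single comment without re-reading the whole batch.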