Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Giving rights to AI is the first step towards us becoming subservient to them. P…" (ytc_UgxtAGPwz…)
- "This robot is smart.. One day she will control herself alone and we have to go t…" (ytc_UgyeaWRHj…)
- "Oh please guys. Have you ever tried to use AI after switching off your internet?…" (ytc_UgyRs_22n…)
- "Too many people are worried about Artificial Intelligence while they completely …" (ytc_UgyLGiUEu…)
- "No need for a helicopter in siberia, I can easily escape seeing anyone for days …" (rdc_d2xmc6y)
- "Very interesting dialogue. His concerns and forecast for the future (in an aroun…" (ytc_UgzVxuPxU…)
- "This is by far the dumbest idea ever! I don’t approve! What is wrong with old fa…" (ytc_UgwrhNcDi…)
- "AI was an upcoming issue for at least half a decade and governments failed to le…" (ytc_Ugwdv3fwl…)
Comment
TL;DR summary: It's unsettling, but I believe a step in the right direction for AI safety controls is to research and identify what the human brain uses to set its weights and biases. If that is something that cannot be learned, coded, modeled, etc., then I believe humanity has a chance to come through this AI revolution stronger and better. What is that missing link? Religion? Our parents, family, and the environment we grow up in that shapes our beliefs? Could this variation in our brain "models" (this diversity) keep us from being subjugated by our own creations?
I'm not religious, but I wonder about it often. When I get too curious, though, I am repulsed. It may be an oxymoron, but religion feels too "secular" to me. Maybe at one point in history it was based on truths about the universe, but it has become another political party or movement used to gain power over others. It is secular in the sense that it pits people against one another even if its foundation is good and peaceful. Iran vs. Israel, with the US stepping in? Is this not a secular use of religion to manipulate people's actions on a massive scale?
Despite the repulsion, I think something like religion could be good (in the good vs. evil context), just not always executed properly. Perhaps this higher power, or spirit, or god is the system of our minds that sets the weights and biases. The source of our training. How we observe, react, and embrace or despise things in the world. Could it be the missing link?
One of the things Geoffrey Hinton said (and I paraphrase, so I will have to watch again) is that one of the differences between the analogue network of cells in our brains and a digital AI model is that we do not know how the "correct" weights are applied in the brain.
I train lightweight visual AI models called LoRAs as a creative hobby, and have observed something similar. When training, we use information created and observed by humans as the source material. Yes, the amount of information and the speed of access are on a different scale, but in the end I make decisions when training about what information to emphasize and what is not important. But how do we "know"? Observation? Experience? "Gut instinct"? What are these things, really, other than chemical synapses that have connected as our brains develop? Maybe I've just argued myself into believing in something higher, like religion.
Something external to us that is the source of setting the weights in our brains. OK, no one read this far, and it's time to get off the internet before my brain is cooked by all the crap out there! Off to take a walk in the park...
youtube
AI Governance
2025-06-28T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzV7DjTqxA_U4Z5hzx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoBKozXNkBVzLETad4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxg9d3S6eAvcsZbWTp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzsSd9ghFbqtsxgQSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRnNfDXJ5YcSk5kuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz0ARW4iqJwAAvU1Lt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxQzeCHVHBusX1lVqF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwqG-wIKkYfjmq1aqV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOqQytA2OZboXd2md4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyq6vQtDW3O2v_WHHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
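A batched JSON response like the one above lends itself to a simple parse-and-validate step before the coded dimensions are stored. The sketch below is an illustration under assumptions, not part of the tool: the allowed values for each dimension are inferred only from the rows visible in this sample, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Assumed label sets, inferred from the sample response above;
# a real schema may allow additional values.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"approval", "outrage", "indifference", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows.

    A row is kept if it is a dict with an "id" and every coding
    dimension holds one of the allowed values.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid
```

Dropping malformed rows silently (rather than raising) fits a coding pipeline where a failed row can simply be re-queued for another LLM pass.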