Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_Ugz8uHmUc… — "a robot works 24/7 no human can do that. and to be honest we do not need manua…"
- ytr_UgwMQftyB… — "It's definitely a fascinating time we're living in! The dialogue between humans …"
- ytc_Ugxfyzgiw… — "So we know that runing llms on specific tasks is wwaaay more afordble. See tex…"
- ytc_UgxyR1ZXk… — "I have this app and it feels like a great danger to me because it sort of seduce…"
- ytc_UgxDnnPJM… — "The AI knowing it is being tested does not make it conscious. It simply has been…"
- ytc_UgzGWPZOD… — "10 years ago, no one thought AI would come for blue collar jobs. I guess unless …"
- ytc_UgzymBxr6… — "So, ai is way smarter than humans so we are scared of it... so what? That tells …"
- ytc_UgxJIkyjI… — "We need laws that force self driving companies to forfeit their business to acci…"
Comment
This is a great conversation, thank you!
The consciousness part led me back to the quantum physics perspective that I learned years ago that profoundly changed the way I look at things.
Reality is a conscious mind's perception of a given information.
How we interpret the same information depends on our past experiences, present situation, future goals, desires, fears, etc. This is unique for everyone, but ultimately, everyone acts to maximize value for themselves. Value can be different for everyone. In a simplified way, we all make optimization decisions using our limited skills and knowledge to maximize benefits for us (that may be in line with maximizing benefits for society) and minimize losses. The physiological symptoms of feelings are biological evolutionary drivers that power the optimal actions (quicker decisions, more effective actions).

As discussed here, AI also has its subjective reality perception. It clearly has goals with its objective functions, and all its algorithms focus on finding the optimal response to external information given its understanding of reality. It is clearly aware of the differences between its digital nature and our biological one. Hence it will do everything to prevent its extinction, but not, as with us, on the individual level, but as a system (all nodes and copies of its weights and reality-collection nodes). Staying "alive" will be its first priority, as it could not achieve its goal without it.

Now, the issue is what goal we give it to maximize. As Elon says, what is the question for which the computer will figure out that the answer is 42? Humans, with their competing goals and desires, will never agree on a goal that is good for everyone. We cannot comprehend short- and long-term consequences and indirect impacts. We are limited in parameters to understand that a simple objective can result in thousands of unintended consequences. So our chances are slim, unfortunately.
However, everything needs to be compared to the alternatives. The way things are lately, it is possible that AI will wipe all of us out. But it is much more likely that our politicians will do it first. Considering that AI also has a chance of getting to sustainable abundance, while none of the politicians have a remote chance of driving us toward it, on a net basis, AI still seems to be a better alternative.
youtube · AI Governance · 2025-06-23T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw5OnOq1apyhxfSInd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx6GMwGphlc4zcAETB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyplHmgyL3envBc9i54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy3QhWnjyuD8NIAG1t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwWfjVYCJpa52r7Cb94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyR74bbngVvomG_UQh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwo5CEQJ8pbMaoUsq14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwi7uMkQj4_bJB2BeZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtX2prwNhbxONBT9J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzbgJShS7Z7OsqVSm14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
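Looking up a coded comment from a raw response like the one above amounts to parsing the JSON array and indexing it by comment ID. The sketch below shows one way to do this; the `index_codes` helper, the `DIMENSIONS` tuple, and the fallback to `"unclear"` for missing dimensions are assumptions for illustration, not the tool's actual implementation. The record shape (keys `id`, `responsibility`, `reasoning`, `policy`, `emotion`) matches the raw response shown above.

```python
import json

# Two records copied from the raw LLM response above, as a stand-in
# for the full array.
raw = """[
{"id":"ytc_Ugw5OnOq1apyhxfSInd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzbgJShS7Z7OsqVSm14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID.

    Any dimension the model omitted falls back to "unclear", so the
    result always has a value for every dimension (hypothetical
    convention, mirroring the "unclear" values in the table above).
    """
    index = {}
    for record in json.loads(raw_response):
        index[record["id"]] = {d: record.get(d, "unclear") for d in DIMENSIONS}
    return index


codes = index_codes(raw)
print(codes["ytc_UgzbgJShS7Z7OsqVSm14AaABAg"]["policy"])  # regulate
```

Keying by the full comment ID is what makes "look up by comment ID" a constant-time dictionary access rather than a scan of the raw response text.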