Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
A.I. may never be conscious like we are... BUT it CAN derive rules through observation that give it an estimate of certain facts being correct. Already AI scans the internet and from that analysis derives a likelihood of some statement being factually correct. Only humans think in terms of true or false, because our brains like absolutes: yes or no, true or false, green or red... the real world is more complex, and the best we can do in reality is a likelihood. E.g. Newtonian mechanics were "absolutely true"... except they are not, as Einstein's laws of relativity take over in extreme cases. And we could say Einstein's laws are absolutely true... except where they come into conflict with quantum theory... and no, that isn't resolved... there is no one theory that accounts for / combines both.
Isn't statistical probability better than rules anyway?
We humans derive rules not just for fun but to help us solve problems and predict events given inputs. I can see AI deriving similar 'rules' through observation. However, I believe AI can take it a step further, predicting outcomes through detailed statistical probability, which is something too complex for the human mind — most human brains can't juggle thousands of comparable facts in dozens of areas... We can write programs to do it, but it is not an innate capability of the limited neurons of our brain.
To put it another silly way: we humans accept 2+2 = 4 as a solid rule, and have difficulty with the idea that that is not ALWAYS the case. Whereas AI has less difficulty thinking 2+2 = 4 is true 99.9% of the time... but, say, in base 3, 2+2 = 11 (1*3 + 1). AI will have a better ability to understand BECAUSE it isn't limited to working in strict rules (although I believe it can use them, AND EVEN DERIVE THEM).
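The base-3 point in the comment above is easy to verify: 2 + 2 is four in every base; only the numeral used to write it changes. A small illustrative snippet (not part of any coding pipeline; the helper name is arbitrary):

```python
def to_base(n, base):
    """Represent a non-negative integer n as a digit string in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % base))  # least significant digit first
        n //= base
    return "".join(reversed(digits))

# The value of 2 + 2 is fixed; its written form depends on the base.
print(to_base(2 + 2, 10))  # "4"
print(to_base(2 + 2, 3))   # "11", i.e. 1*3 + 1
```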
youtube
AI Moral Status
2025-07-30T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgysCaw2IlAL8GU5iT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxkaTkbIhgimwwFXHl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzT6fElH3xvnRyUGxF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyrPkMWU5cMqAqQe8N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxAJEhKtwHzWI4HpvF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwOTJc772mlExwRbvh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxhG8lqQ-EGq3-x8wV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw8s4M98z7e2dcdwe54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwZY6cb-y_jb-EpAo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBQp3XCuITb3c9X194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
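The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) keyed by comment `id`. A minimal sketch of turning such a response into a lookup table, assuming the response parses as shown (the function name and the shortened two-entry payload are illustrative):

```python
import json

# Illustrative payload with the same structure as the raw response above,
# truncated to two entries for brevity.
raw_response = '''
[
  {"id": "ytc_UgysCaw2IlAL8GU5iT94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwOTJc772mlExwRbvh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
'''

def index_by_comment_id(response_text):
    """Parse a raw coding response and index each coding object by comment ID."""
    codings = json.loads(response_text)
    return {entry["id"]: entry for entry in codings}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwOTJc772mlExwRbvh4AaABAg"]["reasoning"])  # consequentialist
```

With the codings indexed this way, the displayed table for any one comment is just the dimension/value pairs of its entry.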