Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The particular concessions you have to make in order to support the idea that current AI is conscious have the unwanted side effect that either:
A) Nothing is conscious and the term is meaningless, or;
B) Everything is conscious (Rocks are as conscious as they need to be, to be rocks consistently)
The problem with the 'industry experts' is that they're typically involved in driving investment, whether that means funding academic research or driving commercial investment into existing technologies, frameworks, models and datacentres. So it's in their interest to portray the technology as considerably more interesting than it, in fact, is.
We'll get there. But we're absolutely not there yet.
Consciousness (in humans) is NOT the intelligent model itself, but a secondary model that can't actually make decisions (a passenger model) - whose job it is to analyse and rationalise the behaviours of the primary model, with a view to supporting and reinforcing some behaviours whilst minimising others, according to some metric of satisfaction/frustration. But that's all done 'after the fact'.
The 'you' that you think you are, isn't even in the driving seat. The voice in your head that truly believes it's driving, is formed AFTER a decision has been made by a model operating below the conscious threshold - so it's merely a narration... an analysis of the fitness of the model it's observing. This alarming fact is borne out, time after time, in countless clinical experiments.
What the science shows us is that all decisions are made before your conscious self is even aware you're about to make a decision. You can slow the whole process down into a hundred tiny micro-decisions and fool yourself into believing you're consciously deciding over the course of 10 minutes... but each micro-decision in the set follows the exact same pattern.
Decide - Analyse - Consciously Intend to decide - Acknowledge - Justify
Consciousness's actual super-power isn't in driving the bus, because it really doesn't do that... but rather in that it gets to analyse, justify, reward and punish behaviours retrospectively, thus adapting the underlying model such that the bus drives better. Or, at least, minimises frustration and pursues reward.
For now, there's FAR less money in creating such a self-aware supervisory system for AI - than there is in simply producing intelligent behaviours from an unconscious mathematical model.
Reliable prediction sells... angsty self-reflection really doesn't.
It WILL come... but, till then, beware of snake-oil salesmen.
Platform: youtube · Video: AI Moral Status · Posted: 2025-06-05T19:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyKdEZR5I0ffHIxVUx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgySpM70a_jX5PK6ODp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgySjZJ4_fHKGi4HMVp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy8PDQoGHLAALUco_h4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwrqPPEKD9li4mM-UZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxWjnrNwIpPF-oNrNh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyFymTUyiL_BpPMKiZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwTajtowynlkO4Dspp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxQSZqQXU9O35Ue8Ih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz5eCuESEX8w3zsnEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
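A raw response like the one above has to be parsed and validated before its rows are stored as coded data, since an LLM can emit labels outside the codebook. The sketch below shows one minimal way to do that in Python. The `SCHEMA` vocabularies are an assumption, inferred only from the label values visible in this response; the actual codebook may define more (or different) categories.

```python
import json

# ASSUMED coding scheme, reconstructed from labels seen in the raw response
# above; the real codebook may differ.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose labels are all valid."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example: the row that produced the Coding Result table above.
raw = (
    '[{"id":"ytc_UgxWjnrNwIpPF-oNrNh4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
print(validate_codes(raw))  # the single row passes validation
```

Rows with an out-of-vocabulary label are silently dropped here; a production pipeline would more likely log them and queue the comment for re-coding.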