Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- could end up like the back story of warhammer 40k IMO in the story humanity cre… (ytr_UgzBSKJoq…)
- They are all sheeples and trend followers. Hell, NPC monicour might be a good wa… (ytc_Ugz4mNHBE…)
- So I ask AI about vampires and Armageddon because I have weird questions that I … (ytc_Ugza64IAp…)
- Still looks fake. Realistic if she was in a video game but realistic irl? Nope, … (ytc_UgwAF6l2p…)
- AI artificial intelligence department opisyal of government decades stupido cor… (ytc_UgzZ8GaPV…)
- [translated from Russian] Men here are rejoicing, a replacement for women 😅 Feelings, emotions, the life energy on whi… (ytc_UgzuWNbx6…)
- Is there anyone in favor of using facial recognition in the west? I mean it's n… (rdc_g14qoad)
- Speaking of this, as i was watching I generated an example patent form via Bard … (ytc_Ugw_DLbWQ…)
Comment
It is universally accepted amongst academics the Universe is built on maths an science, there is nothing yet that they cannot explain. Therefore its only logical to expect our human brain to be the same, no different in any way to a maths and science problem. That being the case, human consciousness is therefore designed and built on maths and science which means AGI has to be and must be the absolute same. That therefore only leaves one outstanding question, can the brain be replicated, the simple and logical answer is obviously YES. I did not say humans could replicate our own brain, but maybe with the aid of increasing more sophisticated and intelligent AI models that possibility has become probable and a singularity nearer and the odds increasingly higher. I would strongly argue we have already reached the lowest level of AGI and its only going to get better, Sir Demis Habbabis suggests 5 years for full AGI and I would challenge anyone to claim they understand Ai better than he does.
youtube
AI Jobs
2026-02-24T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxSoB56cRySfXw4umx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"humiliation"},
{"id":"ytc_UgzdI_9jUT2Rjya3kGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzjCER6hDLfUK7O8U94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2qo59wSnZT6vZs8V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzar8CeKJ17W8fi7iV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxzWflUnjx20y_-KYp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwCWe4jciiR-XyjlIh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyKQ-lppRC6y2PDy0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfqTwPptdqcJ6w5_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxcokgkn3B93zOwhGl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
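A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed values per dimension are those visible on this page (the real codebook may define more), and the function name `parse_coding_response` is illustrative:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the labels
# visible in this tool's output, not from a published codebook.
ALLOWED = {
    "responsibility": {"none", "company", "unclear"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "mixed", "fear", "outrage", "humiliation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate every coded row."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
rows = parse_coding_response(raw)
print(rows[0]["emotion"])  # approval
```

Rejecting out-of-vocabulary values up front catches the common failure mode where the model invents a new label mid-batch, rather than letting it silently enter the coded dataset.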