Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
What's never mentioned is that humans - unlike AGI - have a huge range of innate, 'biologically' programmed instinctual passions driving their barely intelligent thoughts, intents and decisions.
So unless we can upload those passions to ASI, it will not be able to 'want' (desire) anything, or 'decide' (with mental foresight) to 'wait', as though it had a concept of time, before eradicating humanity. What would drive its passional intent? Given we can't even offload our own emotions and mental delusions, let alone upload them to AI ... yet.
Our emotional malicious and benevolent patterns are tied to neurally transmitted sensate data: fear of bio pain, love of bio pleasure, hunger, etc. So before anthropomorphising AI, perhaps first describe what would or could drive its feeling-driven wants and intentions, such that it might decide to 'wait' and freely choose when to destroy humanity?
Okay, so all it would take is psychopathic humanity - all thinking the same idiocy - to program it into behaving mass-destructively, no passionate equipment required? 🤔
youtube
2025-09-05T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQQD1DH02Ch4ywd5F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvSfnbJpdRu6ptCHR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1ZTEOhLM3wtuZjAB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxkGC0CE_7Lt4DWmxR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxGDwPcMoRiQbeUhAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQ9Db389WW2yzzBCF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymQt_83X-2JdfliQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwIxh9EARldj4G_Aep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
```
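A response like the one above is a JSON array with one record per comment, each carrying an `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and schema-checked before the codes are stored — the allowed category sets below are inferred only from the values visible in this response, not from the project's actual codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# response; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only records that have an id
    and whose values fall inside the assumed category sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # a record without a comment id cannot be joined back
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
print(len(codes))  # 1
```

Dropping out-of-schema records (rather than raising) lets a batch of ten codings survive one malformed entry; a stricter pipeline might instead log and re-prompt for the rejected ids.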