Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Persons grow, Machines are made. If a AI learns by itself it is someday humanlik… (ytc_Ugg8IrF1Q…)
- Someone ask that student what he can accomplish now without AI that he used AI t… (ytc_UgxjUaFKG…)
- You know, it's not that weird when you see those super AIs in sci-fi movies and … (ytc_UgykdKH26…)
- I read an article about the response to GPT-5 which said that despite $3tr in in… (rdc_n9qca0j)
- It's funny because AI CEO would either be "We need to kill the customer" or "We … (rdc_n9kpiha)
- On flattery: You can turn that off. There's personality settings in OpenAI's mod… (ytc_Ugx2z7Y-c…)
- I don't think we have a choice with regards to progress. We are neurologically h… (ytc_UghA24C7V…)
- The primary reason this technology is created is for social control, plain and s… (ytc_UgzRT7g2R…)
Comment
It would help if people would stop reading AI as "Artificial Intelligence". It's not; it's *Artificial Idiocy*. A digital text or image processor cannot possibly have the same groundedness in reality as a physical organism which is capable of experiencing pleasure and pain. It cannot begin to understand humans, and if it becomes (in the clichéd phrase) more intelligent than humans, that will only be in the sense of able to process data faster. A huge amount of what intelligence really consists of is beliefs and the ability to question oneself, to fit in to a social environment (containing both machines and people) and to be capable of compassion (people incapable of compassion are known as psychotics). So your message is that the human world will be destroyed by the ELIZA effect: people believing machines to be intelligent, and giving them greater freedom to act, when in fact they are not.
Source: youtube · Topic: AI Governance · Posted: 2023-07-16T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
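Each coded comment carries exactly one label per dimension. As a minimal sketch, the check below validates a coding row against the label sets that appear in the raw responses on this page; the actual codebook may define more values than are visible here, so `ALLOWED` is an assumption inferred from this output, not an authoritative schema.

```python
# Label sets inferred from the raw LLM responses shown on this page;
# the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def is_valid_coding(row: dict) -> bool:
    """Return True if every coded dimension carries a known label."""
    return all(row.get(dim) in labels for dim, labels in ALLOWED.items())

# The coding result from the table above passes the check:
example = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "indifference",
}
print(is_valid_coding(example))  # → True
```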
Raw LLM Response
```json
[
{"id":"ytc_UgwjK5kyGRiovHFHJ-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPZENPzaUiXx1IM0t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzP7Yu3RhFIbBnr9kZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzrREnk22YYc9YWiqJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdgDvbS2HMdFIL5LV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyC_0_b_eGjM1zNVNV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwjdYZIRdZ-UA1fnvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9_RQDbcZ_nBUD5HJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyD9x4G-LjMXABFK8F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyO-c3QESGUwRcpF1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
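The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of looking up one comment's coding by ID (this is illustrative, not the page's own backend code; the two rows are copied from the array above):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgwjK5kyGRiovHFHJ-d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyPZENPzaUiXx1IM0t4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

coding = lookup_coding(raw_response, "ytc_UgyPZENPzaUiXx1IM0t4AaABAg")
print(coding["policy"])  # → regulate
```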