Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "One thing about the AI issue that gives me hope is that the people pushing it ar…" (ytc_UgwtVmEcu…)
- "What NEVER enters the scientific discourse about AI is how energy inefficient it…" (ytc_UgzAN8zUI…)
- "STOP AI!!! You FUCKING IDOT !!! You think there actually JOKING!!! GET OFF THE F…" (ytc_Ugx5H9VRQ…)
- "The problem is it's just a program. It's not a sentient being. And a program, …" (ytc_UgzPbDLWR…)
- "I am unable to understand how a great mind like Lex can talk like a humanist(whi…" (ytc_UgzbQM6yj…)
- "ChatGPT is already old news, AutoGPT and all its variants are already here, and …" (ytc_Ugyp1CviF…)
- "The Mechanics of Corpus Corruption (Model Collapse) I am a prediction engine. My…" (ytc_Ugy8pWXCC…)
- "Listen to his desired outcome….UBI….because of why? They’re rushing into sentie…" (ytr_UgzZ2vFqY…)
Comment

> Humans are incredibly stupid in some ways, e.g., in 2024 elected a convicted felon as president. But in other ways, we're much more brilliant than most appreciate. Look at "FSD cars".....as noted in this video, actual (unsupervised) FSD may never be available on a widescale basis, as that technology simply cannot deal with the demands of driving under all conditions as well as humans can. My speculation.....rarely considered, is that our pursuit of AGI may result in acknowledgement that the human mind has important capabilities not replicable via algorithms. And it's those unique capabilities that will allow humans to retain control of AI.

Source: youtube · AI Governance · 2025-08-03T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyqnQCi9-DvVTe6tRN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwFdgI_BU_x2NMQF-p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz61Oknf_QmkQFXjKR4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzTMwAl-ZJxoNxw8cZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyl3Q49B7dL_GJFRh94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwKdk2z7bBl3tiaYh94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzTtpvi_vnVUqBvWOl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyP4NVUa7cJgH4TwVp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwWCZasCDSaEkhaDyB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw3NxuyRpqihL_fiJx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
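The "look up by comment ID" view above can be reproduced offline. A minimal sketch, assuming the raw model response is a JSON array of per-comment codings in the format shown; the `raw_response` string (trimmed to two entries) and the `index_by_id` helper are illustrative, not part of the tool:

```python
import json

# Illustrative raw LLM response: a JSON array of codings, one object per
# comment, using two of the entries shown above.
raw_response = '''
[
  {"id": "ytc_UgyqnQCi9-DvVTe6tRN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwWCZasCDSaEkhaDyB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the raw model output and build a comment-ID -> coding lookup."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_by_id(raw_response)
coding = codings["ytc_UgwWCZasCDSaEkhaDyB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints: company fear
```

Building the dictionary once makes each subsequent ID lookup constant-time, which matters when inspecting individual comments out of a large coded batch.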