Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its comment ID.
Random samples:

- "The British yet again. Why do we invent everything important and win at absolute…" (ytc_Ugx2xW7-l…)
- "Forget about musk Trump and AI..... Where are these articles that are so far lef…" (ytc_UgyqjtMP6…)
- "@Mirakelpung Your understanding of how AI works is wrong. It doesn't copy/paste …" (ytr_UgyaOPOMM…)
- "I can't wait for the AI bubble burst so we can put everything on the side and ac…" (ytc_UgydQkSLd…)
- "As a user of chatgpt i honestly think this is fake. The responses are not consis…" (ytc_UgySkamBZ…)
- "It gets worse...I suffered a real rape that was deepfaked to look consensual and…" (ytc_UgwCXv05F…)
- "HMM!! really!! At what cost such as subservience to the State providing the sus…" (ytc_UgyFmmvEo…)
- "AI is not human. We have to get everyone to understand it is not human. It can n…" (ytc_UgwGfJ2_H…)
Comment

> I enjoy your interviews, usually. :)
>
> One thing I would like to point out, is Bitcoin. If super intelligence takes over the world, how does Bitcoin remain apart from it? It wouldn't and thereby be worthless, no?
>
> Second, AI is growing exponentially, I see that. I'm curious how we get to super intelligence when these systems are based on human knowledge and text?
>
> Many times AI gets it wrong, even on the latest models, for basic things in my career. I correct it, and another account asks and spits out the same nonsense answer. At some point you still have to get past these mistakes. These models are build on imperfect information at its base.
>
> I admit, I don't know everything, but this feels like a basic problem that should have already been fixed. I correct ChatGPT all the time, daily.
Source: youtube · Topic: AI Governance · Posted: 2025-10-28T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxd28-nsbZS8iTkv2R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxGCz4z9Ubs7ngm_t94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgybJfqi6ntWWPpIjip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzKW2DrgCuGKtzx-up4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwGrmfyXf-CY_fwwFV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx_m41Vxg7Wl6n0SmZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxKE5q6vqoiFW7r2AF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3pF2ltKMmXLYrgGJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXsLVuB2GJ4NZ7eqp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyS197qz3bxxjU4cDZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
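A response like the one above arrives as a single JSON array, one object per coded comment. Before such codes are trusted, each record should be parsed and checked against the coding dimensions. The following is a minimal Python sketch of that validation step; the `ALLOWED` vocabularies here are inferred only from the sample output shown above, not from the project's full codebook, and `parse_coding_response` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above -- the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"unclear", "distributed", "company", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"unclear", "regulate", "ban", "none"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    raising if a record is missing a dimension or uses an unknown value."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]  # KeyError if the model dropped the id
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unknown {dim} value {value!r}")
        coded[comment_id] = codes
    return coded

# Usage with a one-record response (hypothetical comment id):
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"regulate","emotion":"outrage"}]')
print(parse_coding_response(raw)["ytc_example"]["policy"])  # regulate
```

Failing loudly on an unknown value, rather than silently storing it, is what makes malformed model output visible at coding time instead of at analysis time.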