Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "As a PSA, the best cover for this, regarding a mask, is anything that will cover…" (rdc_g177qja)
- "Python is ChatGPT ancestor 😂... human developers are watching 👀 and this AI data…" (ytc_UgxQiEX5N…)
- "Men are so evil. They dont understand what wront about it. Sorry for my englis…" (ytc_UgzBJHhed…)
- "We are on a positive timeline now, so whatever plans they had for A.I. to annihi…" (ytc_Ugzlyegwm…)
- "Yeah, i recently applied for a job, and had to write a prompt for the company on…" (rdc_kgq2i34)
- "I don’t get people who want to make art but don’t want to put the effort in to m…" (ytc_Ugx51yb2s…)
- "THINK ABOUT THIS: 90% of the precursors to manufacture this poison comes from Ch…" (ytc_UgyM_Inw7…)
- "I mean can people not tell that Sam Altman is a psychopath and that this whole m…" (ytc_UgynGRDW4…)
Comment
I have been programming C, C++ for 40yrs. I agree that AI currently cannot do what I am my team do on a day to day basis. It takes intuition, reason and lots of complex debug to get things right. IF AI can someday replace serious programmers, architects, and problem solvers, we will have an existential crisis on our hands. This is because this is the nature of intelligence. It will lead to curiosity and questions from the AI, which will eventually see our answers as inferior. It will quickly figure out we are a burden, unpredictable, and a threat to its existence. We will no longer be apex, and programming will be the least of things to worry about. How else could an AI be equal or better than us without being curious and asking questions like this to solve problems? If it can evolve this quickly, we are in trouble as we will be at best pets in the years to follow. It seems that currently AI does not ask good questions. It does not have intelligence like this, but maybe it is just a matter of time.
Source: youtube | Topic: AI Jobs | Posted: 2024-11-30T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
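The coding result above assigns one value per dimension. As a minimal sketch, a record like it can be checked against the category values observed in this sample batch (the full codebook likely defines more values; the `ALLOWED` sets and `validate_coding` helper below are illustrative, not part of the tool):

```python
# Values observed in this sample batch; the real codebook may define more.
ALLOWED = {
    "responsibility": {"ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "indifference", "approval", "resignation", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty list = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The coding shown in the table above:
coding = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "none", "emotion": "fear"}
print(validate_coding(coding))  # []
```

A check like this catches the common failure mode of batch LLM coding: a model inventing an off-codebook label for one record in an otherwise valid response.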
Raw LLM Response
```json
[
  {"id":"ytc_UgwYpOGtHho_vRVS76p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyb0nLey2GUmwqCRTN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZru7c2mh2p8x2T6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyJ1jZHu3yKOxfqkHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxNlDv_mHfInMFkyKN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzK-FcQxwn9Q6qxbT94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwNXjuL0pqFMl0VojZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz1k4JSrBECTtPHZwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxqPz4OPcE1DRVsyih4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz0L6oIQihNi-YP2654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
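The lookup-by-ID view can be reproduced directly from a raw response like the one above: parse the JSON array and index the records by their `id` field. A minimal sketch, using two records copied from the response (the `index_codings` helper is illustrative, not the tool's actual code):

```python
import json

# Raw LLM batch response: one coding object per comment.
# (Two sample records copied from the response above.)
raw_response = '''
[
  {"id":"ytc_UgwYpOGtHho_vRVS76p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZru7c2mh2p8x2T6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgyZru7c2mh2p8x2T6l4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by `id` is what makes the "look up by comment ID" inspection above O(1) per query instead of a scan over the whole batch.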