Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- The Bubble is bursting and ai models are starting to get worse because they’re b… (ytc_Ugy-CY8rl…)
- I am 52, and I am terrified of AI... It is no coincidence that those with the po… (ytc_UgyNnudfg…)
- Its funny how ai development is more important then the development of the human… (ytc_UgzKgIWLO…)
- I don’t think AI "art" is a problem in and of itself. In one way or another, peo… (ytc_UgwuC1-64…)
- Will AI recognize when established science information in a situation, proves, n… (ytc_Ugz7jjUbi…)
- A lot of talk about human control in case of AI becoming an enemy but I worry mo… (ytc_UgyK7sTID…)
- Worrying about AI as a slave class in a world where we still have human slavery.… (ytc_UgzRKHbBf…)
- The arms race mentality Yang mentions is wild, especially with companies competi… (ytc_UgwydCMVH…)
Comment
I'm a programmer 40+ years. AI is 'automatic algorithm generation', that is all.
For those that aren't familiar: all computer programs implement algorithms. Algorithms are simply the logic and flow of control in a computer program to do some job. Current AI tech provides for automatic generation of algorithms based on inputs and expected output presented to the AI framework creating the algorithm.
AI programs are NOT conscious. Never have been.
Could they be in the future? Like Penrose said, not without extensions and changes to the current state of the art. Most of the papers I've read that seem honest talk in terms of centuries before any self-aware programs exist. Like Penrose, I'm doubtful it will happen that fast (I suspect it will happen sometime.. but I could be very wrong..)
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Published | 2025-08-19T07:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgyLvQOXRn5iARU-oXV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwB7gVF2EMcJfO6Pd14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw_UG_Ne4yndxqnocV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwquRkAtauqE8xc47V4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx2PJuYm-9vPV6wllt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzVqCYWYDfCHPmeOYp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxh8wyVJNFruTVqq5V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxjnFHkhXTPmjeBFvt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz8_43XSFJKFt2DiO94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzNOxTMoeJo2zaspLJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
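As a rough sketch of how the lookup-by-comment-ID view can be derived from a raw batch response like the one above: parse the JSON array and key each coding by its `id` field. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the helper name `index_by_id` is hypothetical.

```python
import json

# Abbreviated raw batch response (two entries copied from the response
# above; a real response covers every comment in the batch).
raw = """[
  {"id": "ytc_UgyLvQOXRn5iARU-oXV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw_UG_Ne4yndxqnocV4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_id(raw_json: str) -> dict:
    """Parse a raw batch response and key each coding by comment ID."""
    return {row["id"]: row for row in json.loads(raw_json)}

codings = index_by_id(raw)
print(codings["ytc_Ugw_UG_Ne4yndxqnocV4AaABAg"]["emotion"])  # → outrage
```

With such an index, rendering a "Coding Result" table for any inspected comment is a single dictionary lookup by its `ytc_…` ID.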