Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples
- Part where muslims burned down an entire train full of returning hindu pilgrims … (rdc_j4zq5ad)
- more like "AI company's CEO thinks Claude 4 isnt getting enough publicity so he … (ytc_UgxfJamqs…)
- Okay I understand, and agree that ai is not real art and that anything made by h… (ytc_UgxM5E5tR…)
- kind of obvious, we can't control humans (original intelligence), so artificial … (ytc_UgwCtAGPn…)
- As a artist and a computer science major this AI art theft is so stupid. AI is a… (ytc_UgxwC7BRP…)
- It may be because of the way ChatGPT feels like talking to a person as opposed t… (rdc_narw984)
- That’s why whenever there’s a stupid Waymo in front of me, I just keep 3 cars di… (ytc_UgyOeQMfu…)
- He has an extremely gloomy vision of the future, and vastly overstated the curre… (ytc_UgwBBxXQm…)
Comment
@AITube-LiveAI Is this concept like the oft-referenced paper clip factory thought experiment?
Efficiency is entirely dependent on what one is trying to optimize; absent that context, it's meaningless. Energy efficiency may differ from cost efficiency or resource efficiency, to take simple examples.
So I think you may be saying that optimizing AI towards efficiency in achieving goals which do not include a sufficiently broad concept of human needs would be unwise.
(The paper clip factory is aiming to serve a human need and optimizing towards that one need; the problem is that it's too narrow a focus, because there are many other needs as well not being optimized)
If so, I think that is both very obviously true (from my viewpoint as a human) and somewhat primitive. The hard part is deciding which human needs to prioritize. Social order? Sustainability? Average wealth? Artistic or scientific output? Radical egalitarianism? Resilience to disasters? Fostering "good values"? Ability to defend the system against internal or external enemies?
I fear that giving each of those facets of human needs the appropriate weight is a complex problem of collective decision making at best, or a chance for an unelected elite to embed their own biases into the AI otherwise.
youtube | AI Responsibility | 2024-07-30T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgzdVEmLUvgYZg11EE54AaABAg.A4wIrR3GojNA6QmDrCw33H","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxTWCEgcEoMkqTMw9R4AaABAg.A4w-kaOWPfqA6QmIcsVLic","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyJH0PID79idhi7jZN4AaABAg.A4vcYznCGlvA6QmMTVtLsq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxvfB7PQEI67dxrc8R4AaABAg.A4vKQI23c0WA6QmSSrQeDR","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyFRtcFGrIaEwRxrQR4AaABAg.A4v7R7y8svIA6QmYehzG20","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyUHb9M6W6zmIvjte14AaABAg.A4v0NBpiaQMA6Qm_oGy95a","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy8FjMV74jdW3GewHZ4AaABAg.A4ugZP3i4XiA6QmdosnyqG","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyqysuQbdxu2lIhuy94AaABAg.A4uZVLyvreBA6Qmj128EyS","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzFK0RkOuESCxhffdt4AaABAg.A4uANtN-kSHA6QmkynfWoL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgzFK0RkOuESCxhffdt4AaABAg.A4uANtN-kSHA6WBLS4PaG3","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
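The raw LLM response above is a JSON array with one object per coded comment, keyed by `id` and carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A lookup by comment ID, as the inspector offers, can be sketched roughly as follows; the function name and the short placeholder IDs are illustrative, not part of the tool:

```python
import json

# Minimal sketch of looking up one comment's codes in a raw LLM response.
# The field names (id, responsibility, reasoning, policy, emotion) come from
# the dump above; raw_response here uses shortened, hypothetical IDs.
raw_response = """
[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "ytr_example2", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup_code(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

code = lookup_code(raw_response, "ytr_example2")
print(code["emotion"])  # mixed
```

Because the model returns one array for a whole batch, a missing ID (e.g. one the model silently dropped) surfaces here as `None`, which is worth checking before writing codes back to the database.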