Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgxD7Z7rS… — "Well, it does bundle all of our knowledge together after all, so AI is obviously…"
- ytc_UgzJylz9B… — "I guess we can't read books or watch movies because it's all just 'scripted.' Lo…"
- ytc_UgzJOee6J… — "These are prophecies the Bible spoke of in Daniel, Daniel spoke of travel, enlar…"
- ytr_UgziTMJMi… — "@TheAngryDesigner Because they are limitless. Knowledge is power. The more you k…"
- ytc_Ugx5pKApK… — "You have described the perfect reason to use AI in college. What you learn doesn…"
- ytr_UgwFlqVOw… — "EXACTLY we dk how AI works. Anyone who says they understand any part of it 100% …"
- ytc_UgwVcL62K… — "Such a dramatic story from a dramatic scientist 😂. Things are being automated a…"
- ytc_UgxfPWDoz… — "You're right, mate. Absolutely, John. If AI becomes smart and curious enough to …"
Comment
I find it quite frightening that people defer to computer programmers on the question of whether AGI is potentially dangerous.
“On 29 December 1934, Albert Einstein was quoted in the Pittsburgh Post-Gazette as saying, ‘There is not the slightest indication that [nuclear energy] will ever be obtainable. It would mean that the atom would have to be shattered at will.’”
Read more: https://www.newscientist.com/article/dn13556-10-impossibilities-conquered-by-science/#ixzz6UgkFDUHG
Einstein was completely wrong and he did not even have the strong economic incentives to be wrong that AI researchers do.
If you asked Henry Ford whether all of these cars might cause climate problems one day, would he even be motivated to listen carefully to your argument about the risks?
Philosophers should be the last people to just defer to engineers on a question like this, where the survival of humanity is arguably at risk. Engineers do NOT have a good track record of predicting the risks of the technologies they work on, and AI researchers in particular have a very poor track record of predicting the rate of improvement of their own field. They were blindsided by the efficacy of neural networks for vision tasks and then blindsided again by AlphaGo. On the other hand, they have made grandiose promises about self-driving cars that have not come to fruition. Nobody knows how far we are from the key breakthrough. It could be a year; it could be a century.
To demonstrate that machines are not “really” on a path to intelligence you will need to define intelligence.
Source: reddit · Thread: AI Moral Status · Posted: 2020-08-10 (Unix 1597035982) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kykw5yc","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_kyltinv","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_g0y7v05","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_g10p5cs","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_g0ys5vt","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
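A raw batch response like the one above has to be parsed and validated before its codings can be stored. The sketch below shows one minimal way to do that in Python, using two records from the response above; the `ALLOWED` codebook is an assumption inferred only from the values visible on this page (the real codebook may contain more categories), and `parse_codings` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the samples shown
# above (assumption: the full codebook may contain additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

# Two records copied from the raw LLM response shown above.
raw = """[
  {"id":"rdc_kykw5yc","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_g0ys5vt","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def parse_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    dropping any record with an out-of-codebook value."""
    records = json.loads(raw_response)
    valid = {}
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid[rec["id"]] = rec
    return valid

codings = parse_codings(raw)
print(codings["rdc_g0ys5vt"]["emotion"])  # fear
```

Validating against a fixed codebook like this catches the common failure mode where the model invents a label outside the scheme, rather than silently writing it to the database.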