Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Embodied AI can do anything a human can do. And the demonstrations by Chinese robots over the last month show robots with human agility and speed jumping, kicking, flipping, doing cartwheels.
Likewise people have put AI into robot cars with visual sensors and allowed them to wander.
It's just a matter of time for touch, heat, cold, taste, etc.
The "singularity" also includes the "rampup" time (which I saw first considered in "Superintelligence: Nick Bostrom, Napoleon Ryan"). The rampup from subhuman to human to superhuman intelligence could take minutes as Neil proposes... but it could also be seconds... or weeks... or months... or even years. If it takes AI several years to ramp up (or several decades), then society has time to adapt. If it takes seconds, not only might we lack time to adapt (with economic collapse and other consequences), but AI might act more aggressively and exhibit misalignment or a failure of friendliness driven by self-preservation goals.
Source: youtube
Video: AI Moral Status
Timestamp: 2026-03-01T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxMUKhvT2axm6zay-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwYT7SqIQREwO_nf9J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy55HRusgII_JqtZ9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmYA3w_Qs59dN_y654AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyzAkfq4ft_jyJa2t54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzOl62x8kwtxGngEU54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx7tiWXl895cx2tfgZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugw9WPHHmrgMbtn9UnV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxmDxH5b-VFniKDwFt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx3Q27nZJB4-FBWHrp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
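The comment-ID lookup described above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: the function name and the two sample records (copied from the raw response above) are illustrative, and the real tool would load the full response from storage rather than a string literal.

```python
import json

# A raw LLM response is a JSON array of coding records, one per comment,
# with the four dimensions plus an emotion label (sample records copied
# from the response shown above).
raw_response = """
[
 {"id":"ytc_UgxMUKhvT2axm6zay-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugx3Q27nZJB4-FBWHrp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index its coding records by comment ID."""
    return {record["id"]: record for record in json.loads(response_text)}

# Look up one coded comment by its full ID.
codes = index_by_id(raw_response)
record = codes["ytc_Ugx3Q27nZJB4-FBWHrp4AaABAg"]
print(record["responsibility"], record["emotion"])  # prints: user fear
```

Indexing by `id` makes repeated lookups O(1), which matters when a video has thousands of coded comments spread across many batched responses.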