Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This AI no matter how smart is in Plato's cave like the rest of you, even if it …" (ytc_UgyXgga_z…)
- "That's got to be to fake u can't put a real human against a machine AI but could…" (ytc_Ugzw58j3c…)
- "So Google played with their own machine learning ai and found out that ai can be…" (ytc_UgxPismFL…)
- "we can program our own ai to be good ... put the constitution in it and see what…" (ytc_UgxnLYBBB…)
- "I'll say there's a danger of falling into inspiration fetishism when it comes to…" (ytc_UgweaAZOf…)
- "Nuclear weapons also have the potential to "nuclearize" and destroy humanity's l…" (ytc_UgztRgdjt…)
- "Very interesting, but a robot 🤖 would look better mechanical; it would be much more accepted…" (ytc_UgzwbJagf…)
- "The lesson she should take from that is: replace the writer with Ai and do the s…" (ytc_UgwU5L2ht…)
Comment
This is one of the most informative interview I have ever watch not just on Startalk but on Youtube. The answers and brevity from the Professor and Nobel laureate is so dense with insights and information. He is really the grandfather of this AI revolution which will impact human civilization immensely. If for nothing at all, I picked one important insight; and I quote "to know what AI will achieve in 10 years, think of what we had thought about AI some 10 years ago and what is happening"... this tells me with have little to no idea the exponential capabilities of AI and LLMs.
youtube · AI Moral Status · 2026-03-02T05:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwksWU7Yt_8Sg2YXah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOAD4qiJukiw70jSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxX8IB49EqdwRALIC94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgymXjgx53-rSyODUp54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy7D_kMmjKSiTeuSvV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgwTnYg_Dok9I7AxJ4h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRl86tEGIR3MsiIZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZLerip2EVmiClYTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UgwNsb_ZsKUnPGo71wl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyG99URthmp6B5WlbJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"}]
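A raw response like the one above can be loaded and summarized with a few lines of code. The following is a minimal sketch, assuming the response is a JSON array of coding records with the schema shown (the field names are taken from the output above; the two sample records are abbreviated from it):

```python
import json
from collections import Counter

# Raw LLM response: a JSON array of coding records, one per comment.
# Only two records are reproduced here for brevity.
raw = '''[
 {"id": "ytc_UgwksWU7Yt_8Sg2YXah4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgxOAD4qiJukiw70jSR4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index by comment ID to support the "look up by comment ID" view.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_UgwksWU7Yt_8Sg2YXah4AaABAg"]["emotion"])  # approval

# Tally each coded dimension across the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, Counter(r[dim] for r in records))
```

The same two-step pattern (index by ID, then aggregate per dimension) scales to the full batch response without modification.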