Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
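The same lookup can be done outside the UI, directly against stored raw responses. Below is a minimal sketch, assuming each raw LLM response is saved on disk as a JSON array of per-comment codings; the function name and the `raw_llm_responses` directory are hypothetical, not part of the tool.

```python
import json
from pathlib import Path

def find_coded_comment(comment_id: str, responses_dir: Path) -> dict | None:
    """Scan stored raw LLM responses (JSON arrays) for the entry that codes `comment_id`."""
    for path in sorted(responses_dir.glob("*.json")):
        for entry in json.loads(path.read_text(encoding="utf-8")):
            if entry.get("id") == comment_id:
                return entry
    return None

# Example: look up one of the coded comments by its ID (hypothetical directory layout).
result = find_coded_comment("ytc_UgzEUX_jK-Wz3lVANUh4AaABAg", Path("raw_llm_responses"))
print(result)
```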
Random samples — click to inspect
- "AI generated images would've been great as a step when making your own images, b…" (ytc_UgxRSklYA…)
- "and now all humans have to pay for that data they learnt from and have in their …" (ytr_UgwTu1RD-…)
- "Even if the majors adopt AI to write scripts, they will still need writers to fi…" (ytc_Ugw4Fi7xV…)
- "Yes, AI is good if you know what you are doing. It will improve over time. On th…" (ytc_Ugz2d-CAm…)
- "Man just like 3 years ago it didn't even really exist. Plus when you're older, y…" (ytr_UgzQSVy38…)
- "merikkka winning the AI is super terrifying as well...imagine the horrors of CIA…" (ytc_UgxEIlC-D…)
- "Imagine how AI get access to databases and museum vaults. Did it manage to disco…" (ytc_Ugydgao5b…)
- "The more of a dam that is removed, the more water, has a damaging effect. Infant…" (ytr_UgzpdL94K…)
Comment
As a psychology professor, I argued (on my own channel) that superintelligence is already here. This person makes unrealistic claims about humanoids. The thinking is not the problem, it is much more about dexterity and the mechanics. Currently there are no robots that can generically grasp stuff well, so plumbers and so on will be around for much longer. Also, the issue is not so much about superintelligence, which we already have, but about other psychological factors, such as values, ideas, aims, goals. These do not "suddenly" emerge in AI models, but are uniquely part of what makes humans special. Although you might possibly able to copy those into systems (in the sense that AI chatbots know what things are bad for humans), these things do not supersede humans. Also, superintelligence does not necessarily mean that humans cannot understand the things a superintelligence system thinks in the way a dog does not understand a human being. It might be, but it is not necessarily the case.
youtube · AI Governance · 2025-09-12T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
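The table above is a single record in the coding schema: one comment ID plus four coded dimensions and a coding timestamp. A minimal sketch of that record as a Python dataclass follows; the class name `CodedComment` is hypothetical, and the value sets list only the codes observed on this page, so the full codebook may define more.

```python
from dataclasses import dataclass

# Codes observed in this sample; the full codebook may define additional values.
RESPONSIBILITY = {"none", "ai_itself", "user", "government"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "unclear"}
EMOTION = {"indifference", "outrage", "resignation", "approval", "mixed", "fear"}

@dataclass
class CodedComment:
    """One coding result: a comment ID plus the four coded dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def validate(self) -> None:
        for field, allowed in (("responsibility", RESPONSIBILITY), ("reasoning", REASONING),
                               ("policy", POLICY), ("emotion", EMOTION)):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unexpected {field} code: {value!r}")
```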
Raw LLM Response
```json
[
  {"id":"ytc_UgzEUX_jK-Wz3lVANUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgztyYKXBFaTxZItqzF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxolkS0Sou4RvX8ByV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzx0KhWyB4FbdTX3Kx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmA55eK3EkmGCumdV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwCFUWH39eaUkL6Zmd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxBspqQLBC34hJ17SV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyXnzvrF6Dda4zu-7N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzLArpRoOSfJNR0-gV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyoWNQwI9FkIlDxj_F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
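Each entry in this array codes one comment along the same four dimensions shown in the Coding Result table above. As a rough illustration of how one entry could be rendered into that table format, here is a minimal sketch; the helper name is hypothetical, and the sample entry is copied from the array.

```python
import json

def render_coding_table(entry: dict, coded_at: str) -> str:
    """Render one raw-response entry as a markdown 'Coding Result' table."""
    rows = [
        ("Responsibility", entry["responsibility"]),
        ("Reasoning", entry["reasoning"]),
        ("Policy", entry["policy"]),
        ("Emotion", entry["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

raw = ('[{"id":"ytc_UgztyYKXBFaTxZItqzF4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(render_coding_table(json.loads(raw)[0], "2026-04-26T23:09:12.988011"))
```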