Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Blah blah / Buy a subscription! / I am training an AI with pirated copies your cours…" (`ytc_UgwEh3fy4…`)
- "LMAO, she is asking about fake images lol. She is so clueless as to what AI can …" (`ytc_UgzIpImJ9…`)
- "The most frustrating part for me is that I know I'm not a very good artist, I ha…" (`ytc_Ugybu6eeE…`)
- "Will AI grow food? Sow seeds, tend to the crops, harvest them, transport them to…" (`ytc_UgxmEPekY…`)
- "Its not something youre born with- its not \"blue blood\". Its skill and years of …" (`ytc_UgzYb1uem…`)
- "holy fuck i din't see that ai already got soo fucking intelligent + they look so…" (`ytc_UgxMKkSwz…`)
- "When I ask ChatGPT for information, I get it and I'm happy the problem is solved…" (`ytc_UgxSWz98i…`)
- "“I want ai to do my laundry and dishes so I can do art and writing, not for ai t…" (`ytr_UgyWbAiWe…`)
Comment
While this is a very interesting interview, it is hard to believe that anyone still believes in the idea of super intelligence with current AI so strongly. If you follow the industry, even people who would have an advantage to argue otherwise are starting to admit that even AGI is impossible with current technology and there is no quick fix for this. It is worth noting that AGI is not super intelligence as he talks about in the interview. But perhaps a human-like intelligent AI agent that can learn any ability.
And researchers in the field cannot prove that LLM models can even reason anything. All they do is predict what text should be answered to their input. And because their data set is so large, it is very difficult to separate reasoning from just giving an answer from memory.
By 2030, many things have changed, but I would say the biggest shock to him will be how little has changed.
youtube · AI Governance · 2025-09-19T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwMsj9bqLEuSZYRS1Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxzOJ7TxzY7TA8Mmzt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx8RGpZ020sc4vTWFF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCtAGPnM9LtWKZual4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2umnYv4zpQRHEiq54AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyRTMIUytlqH8Htn4J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyx8xchuPyPbKae4IZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwXvoUjstM4YrJCBwx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxvF79G9kP-WR1oh1t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyRvutUvhQ0Uz3uRRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
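A batch response like the one above can be validated before the codes are stored. The following is a minimal sketch in Python; the allowed labels per dimension are an assumption inferred only from the values visible in this response (the real codebook may define more), and `validate` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed labels per coding dimension (assumption: inferred from the
# values that appear in the response above; the full codebook may differ).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "ban", "regulate", "industry_self", "none"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM response and reject rows with unknown labels or IDs."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset start with "ytc_" (or "ytr_" for replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError("bad id: %r" % row.get("id"))
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError("%s: bad %s=%r" % (row["id"], dim, row.get(dim)))
    return rows

# One row taken verbatim from the response above.
raw = ('[{"id":"ytc_UgwMsj9bqLEuSZYRS1Z4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
rows = validate(raw)
print(len(rows))  # 1
```

Validating at ingest time catches the common failure mode of LLM coders: a label outside the codebook (a misspelling or an invented category) surfaces immediately instead of silently polluting the coded dataset.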