Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or browse the random samples below.
Random samples — click to inspect:

- "AI can retrieve information from databases faster and more efficiently than we c…" (ytc_UgzrYs4mT…)
- "Oh fuck, who taught that AI emotions, if anyone here is responsible for that shi…" (ytc_UgzOK1bbn…)
- "👋 Hi. Tractors are cool. AI is destructive in more ways than I care to imagine. …" (ytc_UgwUZez4o…)
- "We all know a large portion of creatives have lost their jobs to AI, but how lon…" (ytc_Ugxsg-t-4…)
- "Dead internety theory is real. AI-generated articles, pictures, videos, music, a…" (ytc_UgxY0Ldkd…)
- "The biggest danger of AI is that Government will regulate it, Dangerous? Nope, i…" (ytc_Ugx0yZ1hU…)
- "*I don't believe AI is 'hiding.' I think its a very useful lie being told for ve…" (ytc_UgwVvAmxu…)
- "Absolute nonsense/hogwash. Another bearded guru with "sambuka eyes" predicting …" (ytc_UgxcEcdaY…)
Comment
20:21 "... We can program them to be subservient... and being smart is intrinsically good..."❓🤖🤔😶
I have a positive view of AI as well.
And I tend to believe that really smart AIs will be able to develop a kind of respect for (other) forms of life and the entire ecosystem of Earth. I'm worried about AI that isn't smart enough to understand the world that it emerged from...
Really smart (superintelligent) AI will be able to understand the meaning and value of (human) life and culture.
But Yann LeCun sounds shockingly naive.
He really manages to stirr up serious worries in me.
Why didn't you invite Joscha Bach?
Mitchell:
35:02 "... the fallacy of dumb superintelligence." !
Good!
Source: youtube · AI Governance · 2023-07-20T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwavikaAMC_ucQ0x9h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw6IOqZwMcewU2CbuV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwtdjIuSgwcMRt016J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw4-agCdVl3pjy4Hfd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyHjX_f4QKz6AB1RVt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyQ0DtmFQxkrFu2X2J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgynLYZwDYaGncX1JJB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJwQ_qPwcXl7w6jjl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxROEnFnRwgta-ItIR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy8T12PciZCtmv_UGp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
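The lookup-by-comment-ID view above can be reproduced directly from the raw response, since the model returns a JSON array with one coding object per comment. Below is a minimal sketch; the helper name `index_by_comment_id` is hypothetical, and the sample payload reuses two rows from the response shown above.

```python
import json

# Two rows copied from the raw LLM response above, used as sample input.
raw_response = """[
 {"id":"ytc_UgwavikaAMC_ucQ0x9h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy8T12PciZCtmv_UGp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]"""

def index_by_comment_id(response_text):
    """Parse the raw response and index each coding object by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_comment_id(raw_response)
# Look up one comment's coded emotion dimension.
print(codes["ytc_Ugy8T12PciZCtmv_UGp4AaABAg"]["emotion"])  # approval
```

In practice the raw response would first be validated (e.g. checking that every object carries all four dimensions) before being indexed, since LLM output is not guaranteed to be well-formed JSON.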