Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- > Facial recognition is going to replace drug search dogs. Nah cause the han… (`rdc_jv6yr0c`)
- I only use AI for shitty images for my personas on AI websites and my profile pi… (`ytc_Ugz9JIlDK…`)
- As a Canadian, I’m really happy the world is waking up to Modi and his actions. … (`rdc_luadfqw`)
- This is pure disinformation. AI is NOT intelligent. All what Chatgpt does is to … (`ytc_UgwT3tKoC…`)
- AI bullshitter 4.0, machines are for poor people, rich people with still have pe… (`ytc_UgwyTA8UW…`)
- I think in the medium-longer term, AI is going to be extremely disruptive and ch… (`ytc_UgyWBUn3k…`)
- AI can replace virtually anyone on the keyboard but the physicality of implicati… (`ytc_Ugz8GVsRW…`)
- Good job on the switch up, he didn't fight a robot. I seen the original of this… (`ytc_UgxWIPHVy…`)
Comment
Lots of this interview is based on super intelligence being fait accompli, whereas it seems far from certain.
LLMs so far seem to be great aggregators of existing information and at providing an average answer, precisely because they are trained on human knowledge.
Not sure how you get super intelligence by learning entirely precisely from human intelligence?
And the idea that humanoid robots will be doing all plumbing by 2030 seems fanciful at best - if we have huge supplies of unemployed labor, surely it will be cheaper to employ a human than building an expensive robot?
youtube · AI Governance · 2025-10-17T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzDYYeW4s5DZsLhVA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwO_xnqJAvZ1B1jFhh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxN_5vvrhKMDmwaeh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyozOOAbIgShXcqZdx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugypd3bquZmsDCPS0-Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxczS_RnfViUSkHjHx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyz65of8b8OSq0CEMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyRi7kBg0-qQGVcUDx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXeLmJiVca_zFD0O14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFlwFKNUHdtGkXs9V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})
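A minimal sketch (not part of the tool itself) of how a raw coding response like the one above might be parsed and tallied downstream. Note that the raw output above closes with `)` where JSON requires `]`, so the parser includes a defensive repair for that case. The function name `parse_coding_response` and the two-row sample string are hypothetical; the field names and category values mirror the response shown above.

```python
import json
from collections import Counter

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM coding response into a list of coded rows.

    Repairs a stray ')' in place of the closing ']', which raw model
    output sometimes contains, before handing the string to json.loads.
    """
    raw = raw.strip()
    if raw.endswith(")"):
        raw = raw[:-1] + "]"
    return json.loads(raw)

# Hypothetical two-row sample in the same shape as the response above,
# deliberately ending with ')' to exercise the repair path.
raw = (
    '[{"id":"ytc_aaa","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"},\n'
    '{"id":"ytc_bbb","responsibility":"developer","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"})'
)

rows = parse_coding_response(raw)
emotions = Counter(r["emotion"] for r in rows)  # tally one dimension
print(len(rows), emotions["approval"])  # → 2 1
```

The same `Counter` pattern works for any of the four coded dimensions (responsibility, reasoning, policy, emotion), which is one way the per-comment codes could be rolled up into the kind of summary the dashboard displays.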