Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
Random samples:
- ytc_UgyVJaTZH…: "if there will be one day only one spark of a new idea/conviction=>resolution in …"
- ytc_Ugx278BNN…: "The work we put in the even train and produce is also work. So yes I believe any…"
- ytc_UgwWUrA4R…: "Genuinely what is the point of gen AI? The only reason for its existence is a ch…"
- ytc_UgzNygvXn…: "So, if AI is going to put so any out of work, shouldn’t we lower the retirement …"
- ytc_Ugyi0YWaL…: "The gaslighting argument "ai art is here to stay and you have to adapt or die" …"
- ytc_UgyzccatZ…: "I told Messi to score a goal at the world cup ...so I basically scored a goal i…"
- ytr_UgzIFdCja…: "When they said there are "no real guardrails" I did a spit-take. You literally…"
- ytc_UgwAs0SEC…: "Its the three options conundrum. You can make it fast, you can make it good, or …"
Comment
I'm only 11 minutes in and Geoffrey mentioned his friends with the two extremes. How about someone like Ray Kurzweil? His futurist views discuss the technological singularity and how AI and nanorobots will make humans immortal. Couldn't transhumanism technology also make humans equally as smart as AI? In fact, couldn't it enable us to nearly instantaneously learn new things that would have once taken weeks, months, or years to master?
youtube · AI Governance · 2025-06-28T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxzfUOJ1keoL9Jjpxh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzt7a4uxgpRvR6eP594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxLk9X0RtnLmC70nfl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytFYuE8b08MUGLtHp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwWSfMalkrmYln0cvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwhGaYPkcAGurR0AAt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdI-FBtc79Vj3Vx_B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzs2bJ3Dnx_HwGrh894AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuPBt1JATOk_FxvH14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwyivmIDdTInhwXtpp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"resignation"}
]
```
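The raw response is a JSON array of per-comment codes keyed by comment ID, which is what makes lookup by ID possible. A minimal sketch of how such a response could be parsed into a lookup table and sanity-checked; the allowed value sets below are inferred from the sample rows shown on this page, not a confirmed schema, and `index_codes` is a hypothetical helper name:

```python
import json

# Assumed value sets per coding dimension, inferred from the sample
# output above (not a confirmed schema).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed", "unclear"},
}

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, dropping rows with unrecognized values."""
    table = {}
    for row in json.loads(raw_response):
        codes = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            table[row["id"]] = codes
    return table
```

Indexing by ID rather than list position keeps the lookup robust if the model reorders or drops rows, and the value check flags responses that drift from the expected label set.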