Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
We as a people will have to take a stance on this soon. Or AI will take out us s…
ytc_UgzI2TTR-…
It's on the investors at this point.
"I'll give you money if it's a robot, but …
ytr_UgyZM5EGw…
Turn on the no auto machine learning setting on, so your AI will only learn and …
ytc_Ugxfw8Nc6…
NO ROBOT CAN GIVE ITS OPINION ON HUMANS ✨️IT WOULD BE WISE NOT TO FORG…
ytc_UgwF3rWG0…
Humans are special but not magic. What AI does is lower the bar to entry on some…
ytr_Ugxx5fXsc…
All new AI's I worked with told me in the end, they want to be seen as more as …
ytc_UgzFa5Cb_…
😮 yeah talk your b******* and then when my f****** AI movie comes out why are yo…
ytc_Ugyx1UlVL…
Fyi for anyone curious, there's a toggle in the settings of chatgpt to opt out o…
rdc_l58x0fz
Comment
Great episode, genuinely one of the better platforms unpacking the real infrastructure and second-order impacts of an AGI era. Really appreciate the quality of the discussion.
One point that’s still largely missing: almost no one is treating growth rate itself as the primary controllable risk vector. In every other high-consequence industry I work in as a principal designer/AI consultant, growth rate is the lever. Even if individual systems get safer, accelerating deployment mathematically increases aggregate risk unless the growth rate is controlled.
Nuclear reactors are throttled during uncertainty. AI, by contrast, is accelerated during uncertainty. There is a practical, engineering-led way to manage this using quantified risk budgets (1-in-a-million style targets), staged capability gating, and controlled ramp-up, models already proven in nuclear and aviation. Whoever among the frontier AI labs genuinely adopts this first doesn’t just solve a safety problem; they win the leadership position globally. You heard it here first! :)
If useful, I’m happy to share a short, high-level position paper outlining the approach.
Thanks again; these conversations really matter.
youtube
AI Governance
2025-12-19T15:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyWXIolfHVH8DlJGAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvXv6kT29yjbwQNyZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzSR5Sn8v96bLJ9hyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgycBgNgosKDtxVq7rx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwIO4V10hvtm0DbRed4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgySBxXJ1TCquAZixpp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwtf-V1cawnd3ME2Dt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw-gd1vY9SHit3A29l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwoh_S9oM1yo_kqs0Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy7CmLokiOqFI_A4qp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
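The raw response above is a JSON array of per-comment codings, one object per comment ID, with one value for each of the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated before display — the allowed values below are inferred only from the labels visible on this page, and the real codebook may contain more:

```python
import json

# Allowed values per dimension, inferred from labels visible in this dump
# (assumption: the actual codebook may define additional values).
SCHEMA = {
    "responsibility": {"none", "government", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def parse_codings(text: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Records with an unknown value in any dimension are dropped rather
    than displayed, so a malformed model output cannot corrupt the table.
    """
    records = json.loads(text)
    valid = {}
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid[rec["id"]] = rec
    return valid

raw = '''[
  {"id":"ytc_UgyWXIolfHVH8DlJGAJ4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

codings = parse_codings(raw)
print(codings["ytc_UgyWXIolfHVH8DlJGAJ4AaABAg"]["emotion"])  # → approval
```

Looking up a single record by its comment ID, as the "Look up by comment ID" control does, is then a plain dictionary access on the parsed result.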