Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwDhspAq…` — "8:43 use teslas to represent Waymos with a misleading title and doesn’t mention …"
- `ytr_UgxHtYGLG…` — "Well, I don't know, maybe EU regulates because, then, companies have a much hard…"
- `ytc_UgxZFpm14…` — "Wait for them to make an AI apology video for making an AI apology video to apol…"
- `ytc_UgxLJWbB5…` — "Hii there Elon and Tucker! Beautiful interview. As usual, when it's about Radica…"
- `ytc_UgwzwTpru…` — "Imagine the possibilities! MAGA parents will be able to tell AI to create histo…"
- `ytc_Ugw2QgNHH…` — "🚨Narrow AI vs. Superintelligence: A Choice We Must Make🚨 I’ve been following Dr…"
- `ytc_Ugw9M5PfW…` — "Obviously rage bait, and hasn’t owned a Tesla. LiDAR way to expensive reason why…"
- `ytc_UgxZJBc4T…` — "Exactly, it is very sad that a lot of creative content will be destroyed because…"
Comment
Really the only question is can we have AI automate AI research, and what that means is consistently improve and advance and not stall out
And thats a real question, I'm an electrical engineer and work with these systems alot. They're very very good at solving fully constrained problems
I think using that core and building infrastructure around it could lead to very rudimentary clunky automated AI research by 2030
Unoptimized AI research, cave man AI research
But still AI research, and throw enough compute at theses models, alongside infrastructure that let's independent models coordinate
And I think theyd be able to grow themselves from caveman to intermediate to highly effective. It snow balls I just dont know how quickly
Source: youtube · Topic: AI Governance · Posted: 2025-11-26T20:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyrzU0n_LBSCM4YlxR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx0HmtYfuy5si1fF9d4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz4o98zTlWOyCBx69Z4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxbIR9XUWO_pba32FJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwals-yypYYD8CWHv14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwStF4M_0ZBoNwrnzV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx-MyIK89Z-OkFJ_qR4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw79fdiJJTJWGmg0Bx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwyyrUO1gyDRvSuR6Z4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxjG-COIdWAqLguDTB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
```
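A batch response in this shape can be parsed and sanity-checked against the codebook before it is stored. The sketch below is illustrative, not the pipeline's actual implementation: the field names come from the JSON above, but the sets of allowed labels are assumptions reconstructed from the values that happen to appear in this one response, and the real codebook may contain more.

```python
import json

# Allowed labels per dimension. ASSUMPTION: reconstructed from the labels
# visible in the response above plus the "unclear"/"none" catch-alls;
# the project's real codebook may define additional values.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> list:
    """Parse one raw LLM batch response and validate every coded comment."""
    rows = json.loads(raw)
    for row in rows:
        # IDs in this dataset start with "ytc_" (comments) or "ytr_" (replies).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError("bad comment id: %r" % row.get("id"))
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError("%s: %s=%r not in codebook"
                                 % (row["id"], dim, row.get(dim)))
    return rows

# Example: one entry copied from the response above.
raw = ('[{"id":"ytc_Ugz4o98zTlWOyCBx69Z4AaABAg","responsibility":"unclear",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
rows = parse_batch(raw)
print(rows[0]["emotion"])  # approval
```

Rejecting a whole batch on the first out-of-codebook label is a deliberate choice here: a single malformed row usually means the model drifted from the prompt, so the batch is worth re-running rather than partially ingesting.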