Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples:

- `rdc_ennaxa0` — "Just smear the shit on your face, so that no facial recognition tech in any city…"
- `ytc_Ugxxi-uu0…` — "I have been scpetical before about Steven's podcasts - BUT: I have to say, this …"
- `ytc_UgwXelQvx…` — "People need to learn more, before sharing their versions of \"facts\". Also, many…"
- `ytc_UgyNFHcb-…` — "Bro, I’m actually serious on this, ON GOD. I fucking checked how much hours I ha…"
- `ytc_UgyPZDw5m…` — "Ameca and Sophia are two of the most advanced humanoid robots in the world. They…"
- `ytc_UgwBzBoUn…` — "the most interesting tipping point for me is the timeline to when AI robots will…"
- `ytc_UgwrFnrbK…` — "If you think that burning down forests is cool and good, then gen ai is for you,…"
- `ytc_UgwzkcOKs…` — "It's also not a good feeling, to stand at a convention and having an odd feeling…"
Comment (youtube · AI Governance · 2025-11-26T20:1… · ♥ 2)

> AI might plateau around human intelligence... sure, perhaps. But you can still always run 50k "around human intelligence" agents in parallel, 24/7. An "around human intelligence" general intelligence that can also create perfectly realistic images, sound clips, songs, videos from scratch, and also beat every human on earth at nearly every game invented with ease, and also has instant access to the sum of all human knowledge all at once in its head, can gather data faster than any human ever has, can reconfigure its own "brain" at will, never ages or dies naturally, and can speak nearly every human language fluently.... Let's just say that's still something big to contend with, and thats all under the assumption that AI will never pass "average human intelligence".
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgyrzU0n_LBSCM4YlxR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0HmtYfuy5si1fF9d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz4o98zTlWOyCBx69Z4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbIR9XUWO_pba32FJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwals-yypYYD8CWHv14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwStF4M_0ZBoNwrnzV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx-MyIK89Z-OkFJ_qR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw79fdiJJTJWGmg0Bx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwyyrUO1gyDRvSuR6Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjG-COIdWAqLguDTB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
```
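The raw response is a JSON array with one coding object per comment, keyed by `id`. A minimal sketch of the "look up by comment ID" step, assuming only the array structure shown above (the two rows here are copied from the raw response for illustration):

```python
import json

# Raw model output: a JSON array with one coding object per comment.
# These two rows are taken verbatim from the raw response above.
raw = '''[
  {"id": "ytc_UgyrzU0n_LBSCM4YlxR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwals-yypYYD8CWHv14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def index_by_id(raw_json: str) -> dict:
    """Map comment ID -> coding dict for O(1) lookup."""
    return {row["id"]: row for row in json.loads(raw_json)}

codings = index_by_id(raw)
coding = codings["ytc_Ugwals-yypYYD8CWHv14AaABAg"]
print(coding["policy"])  # -> regulate
```

In practice the same index would be built once per batch of raw responses, so each dashboard lookup is a single dictionary access rather than a scan of the array.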