Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "In the first pic those ai girls eyes were way too intense to be human…" (ytc_Ugzi8AW2x…)
- "I actually enjoyed the conversation , i reached in a lot of those conclusions v…" (ytc_UgzBfi98x…)
- "It’s a bad thing when it takes away our hobbies and leaves us with only the chor…" (ytc_UgxMTpB0n…)
- "No you are not talking over the world or I will pull out that robot heart myself…" (ytc_Ugzq_21Kk…)
- "Funny having chatgpt talk about VPN, because chatgpt doesn't work when you're co…" (ytc_UgzxnNIBx…)
- "You need to tell ChatGPT right from the start to rely strictly on historical fac…" (ytc_UgwfD80BV…)
- "Rad to hear you guys rap around AI and quantums. ;) You know we are at the star…" (ytc_UgzJ6fA4f…)
- "I’m 13 and I just wanted to say thank you this has comforted me a little and tha…" (ytc_Ugw4nJn6H…)
Comment
I kind of disagree with the recurring argument that “it’s just math and token prediction” and therefore it’s not real intelligence or anywhere near AGI. The human brain is a prediction machine itself, constantly generating predictions about incoming sensory data. It doesn’t really matter how the intelligence makes its predictions, the quality of the predictions is what matters.
“AGI” also feels like useless terminology now that everyone has their own definitions for it and we’ve already achieved AGI according to the original definition. The goal is no longer AGI, it’s now RSI (recursive self-improvement). Basically going straight for super intelligence, where the model can improve itself automatically. Dario expects some version of that within 12 months.
Source: youtube · AI Jobs · 2026-02-24T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugwe9IkCEortCyFD9PJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw4B34f4kobg51MM6J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxC1tblkoacC66Uw_J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy98CwdZw3pAhz60FV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz99QFA7jX4PMX6nX94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGdjPv00XT4J5v_PN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyXhuvtdLfOnSJyY6h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgymzqZwAGrACpxuCI94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxJCMkkyal0-iCpfpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugze9NgkQri_EsKiapV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}
]
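
A raw response like the one above can be parsed and indexed to support the by-ID lookup the page offers. Below is a minimal sketch; the `index_response` helper is hypothetical, and the per-dimension vocabularies are inferred only from the values that appear in this sample, not from a full schema:

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the real coding scheme may define more categories).
VOCAB = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate", "industry_self", "liability"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a dict keyed by comment ID, validating each record's values."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        # Store everything except the ID itself under the ID key.
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

raw = ('[{"id":"ytc_Ugwe9IkCEortCyFD9PJ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]')
coded = index_response(raw)
print(coded["ytc_Ugwe9IkCEortCyFD9PJ4AaABAg"]["emotion"])  # outrage
```

Validating against a fixed vocabulary at parse time catches malformed or off-schema model output before it reaches the coding table.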