Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.

Random samples:

- "We are in simple or "weak" AI! All of this is delusion. Technology evolves and a…" (ytc_UgwPTddvt…)
- "You fking idiots these fights are not real the robot is just put in the video wi…" (ytc_UgwfqfHNk…)
- "Wtf does an ai artist do its basically the ai that does it and they dont out any…" (ytc_UgzZIR7Rs…)
- "The one thing I found a new 'ai' tool that was added to our teleconferencing pla…" (ytc_Ugw_yhP50…)
- "From my point of view, as an amateur learning to program, it's way better to hav…" (ytc_UgwYILdsR…)
- "Not buying a self-driving car until I can sleep in the back while it takes me to…" (rdc_e13wwl4)
- "I'm super interested to see AI develop ethically, with legal guardrails that ref…" (ytc_UgzLPYbgA…)
- "I can't wait for Ronan's follow up interview with Suchir Balachi to get his insi…" (ytc_UgySxNz0D…)
Comment
this is hella stupid.
It is so full of holes that you could use each of the holes to the double the brain capacity of any given “LLM’s are totally just 1-3 years away from AGI” proponents. Apart from real world practical stuff, such as people not noticing a data center deciding to build (and somehow power) hundreds of new data center, there are just some theoreticians that need to go and read some neuroscience papers., Eg even if you end up building a skyscraper sized data server in three years and get it to run so that what we would recognize as intelligence manifests as an emergent property of the hardware-software, it is pretty arrogant/stupid/technohyper optimistic to think that *this* form emergent intelligence would somehow know what makes it become a sentient intelligence. There is just no reason to think that an AGI in a huge server would be able to detangle what makes it it tick, anymore than we can explain our own emergent intelligence. Even modern neural networks just run as black boxes and cannot explain their own functions, and an AGI would be millions of time more complex.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-08-14T20:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyxSQz13FMxlaATM94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgydsWrUaghYf1ElErt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoKvibx8VavjvHGsd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwguGzCfJs4KwjWZKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxC7SL1jgHJUwJMkpl4AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzpqZfwTr4Ya5Z10hN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVHGVKkjc6axcbzI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx9VMoa4XEQAEZpGcl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlEhDUifLS8lfcSlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```