Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- ytr_UgwfuoSFI…: @total_noob_tankernoob4225 _They collect way more data_ Not so much. Waymo has …
- ytc_UgzWTwjfu…: we have trouble recognizing AI videos, we will have trouble recognizing these ro…
- ytc_Ugz9YKRDk…: BRO IT CAN FIRE A GUN, CHAT WERE COOKED THE ROBOT APOCALYPSE IS NEAR 😭…
- ytr_Ugwt9DKfh…: What i'm trying to say is that AI tools with a human behind it with real intent …
- ytc_UgwcH9FQy…: I think AI art is fine on its own, but these weirdos trying to pass it off as re…
- ytc_Ugwt9iiUk…: Hey hey boo boo looks like we got some quantum physics working in there with the…
- ytc_UgxBCTWgI…: All technologies were developed to solve problems and or create entertainment ..…
- rdc_kjo2y0g: As horrible as a situation as this is isn't it kinda good that its happening to …
Comment
AI has three big problems: sufficient methods and economics to reach it's current demand within a short time frame. sufficient free accessible (plagiarized) knowledge to feed itself. And then enough energy to sustain it's progression. Currently the first one is taken care of: investments and future prospects are incredibly positive. The second one is fallible because language models have already corrupted the source where most models are based on (knowledge limitation/bias). Thirdly, unless we find a way to make an AGI with the same efficiency that is needed to power the human brain we can't do much progression altogether. It simply takes too much worldly resources (electricity and water) to make it fit in a profitable concept. But then again, this can all change quite quickly, if you're positive about it like this guy.
youtube · AI Governance · 2025-06-30T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyBFRctol1IK9DsL6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzQjuYGjYgC3FX9bIJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwHtlTRkSFYW1CqN7p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzunyJrA1KiATZPerV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx5Uvc1_OMhapazzY94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzIirZbEIv7tvuOX554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxxDxE2ECA7AzTz6k14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzBoQc2DqzllkIma9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyCLBEJAZHeRl4oaYV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyHnBCF3GaT0MDB6nF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
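The raw response above is a JSON array of per-comment coding records, one object per comment ID. A minimal sketch of how such a response could be parsed into a lookup table, assuming the four dimensions shown in the Coding Result table; the `OBSERVED_VALUES` sets below are inferred from the values visible in this batch and are not necessarily the full codebook:

```python
import json

# Category values observed in this batch (an assumption, not the full codebook).
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into a
    mapping from comment ID to its coded dimensions, warning on any value
    outside the observed category sets."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in dims.items():
            if value not in OBSERVED_VALUES.get(dim, set()):
                print(f"warning: {comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = dims
    return coded

raw = ('[{"id":"ytc_UgyBFRctol1IK9DsL6t4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
coded = parse_coding_response(raw)
```

Keying by comment ID mirrors the "look up by comment ID" workflow: the coded record for any comment is then `coded["ytc_…"]`.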