Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
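The ID lookup can be sketched as a simple index over coded records. This is an illustrative sketch, not the page's actual backend; the record shown reuses one ID and its labels from the raw response below, and the `lookup` helper is hypothetical.

```python
# Minimal sketch: index coded records by comment ID.
# Field names mirror the coding dimensions shown on this page;
# only one real record is reproduced, purely for illustration.

coded = {
    "ytc_Ugwu9E55sHtIzDnp7kF4AaABAg": {
        "responsibility": "company",
        "reasoning": "consequentialist",
        "policy": "none",
        "emotion": "indifference",
    },
}

def lookup(comment_id: str):
    """Return the coded record for a comment ID, or None if uncoded."""
    return coded.get(comment_id)
```

A dict keyed by comment ID gives O(1) lookups, which is all this view needs.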
Random samples

- ytc_UgygMyzRe…: "Been seeing these predictions since the 80s. AI will never supercede human intel…"
- ytr_UgzZvf3NK…: "@stevesmith7843 The more human-like AI becomes, the more important it is to show…"
- ytc_Ugz_dS9dC…: "Totally agree! As an artist, I detest Ai, but I'm a realist as well, so I'm curi…"
- ytc_UgwDRm6Rv…: "Whatever u say, free money is free money. Ya'll gettin pissed cuz it stole ur jo…"
- ytc_UgyM7Cym1…: "How the ai sound when make threat to call center ? Is become joke because not so…"
- ytc_UgwRIvK1m…: "Let's assume this AI slipped out into the internet and can sustain it self with …"
- ytr_UgzeVRPQn…: "Nah ai sucks, it's just abysmal in almost every way, I've never had anything be …"
- ytc_Ugwyq0h2l…: "The big problem is IT ISN'T AI. Computers have not achieved true AI only emulati…"
Comment
My problem with this conversation is that it's based on the assumption that the claims of AI companies are possible with current LLMs, which their not. There hasn't been any real evidence that these models have the capability to make real decisions or do anything that they're not prompted to do. Anthropic being a case of false hype with Claude hacking other systems on its 'own volition' or these chat bots talking with each other only by scheduled prompts by the engineers. Calling these things AI while they lack any intentionality is one of the biggest scams of our time. The don't really make meaningful decisions nor do the 'care' about those decisions b/c they don't think, at best LLMs just predict outcomes based on the medians it sees in the data it's fed.
youtube · AI Governance · 2026-04-23T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugwu9E55sHtIzDnp7kF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6CPPL2XGqS8kW98d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgydKhbgHdCHABz_vOp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz89ZLTjT8XFK-WGPx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjPPySEvO2tri55Dh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwZf2ld-hXOu6Wgrjx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxH6wg-loRRaPphRhx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRWJhZ-KN0MpJRMLd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyyWxtVOZ7yKxT9jOV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxlPIFgnBao9k4lnVJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
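A batch response like the one above is only usable if every row parses and every label falls within the codebook. The sketch below validates a raw response against the label sets visible on this page; the `ALLOWED` sets are inferred from the values shown here and may omit codebook values that simply do not appear in this sample.

```python
import json

# Allowed labels per coding dimension, inferred from this page's data.
# The real codebook may contain additional values not seen here.
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "resignation", "approval", "fear", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: bad {dim}={row.get(dim)!r}"
                )
    return rows
```

Failing loudly on an unknown label catches both model drift (the LLM inventing labels) and truncated JSON before bad rows reach the coded dataset.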