Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Imagine Comcast customer service AI being as user reflective as ChatGPT. People …" (rdc_mlh59ba)
- "This is scary. AI can't survive without us so they'll fool us into thinking we c…" (ytc_UgwyjvFhW…)
- "So let me get this straight: The democrats don't care if criminals and murdere…" (ytc_UgzWNiigk…)
- "Lady needs to wake up and realize, both car are driven with lot of AI…" (ytc_UgxYN2NKO…)
- "Try VHEER Ai, try Hanyuan Ai.. watch me recent videos for more details on these…" (ytr_UgzXT0OrG…)
- "crazy how we will watch this in classrooms in 200 years from now in 2225 talking…" (ytc_Ugy9neK19…)
- "LLMs are a dead end technology as they are now. But that doesn't mean they're no…" (ytc_Ugw7SCNpT…)
- "That's bound to fail. Rhino horn buyers are motivated by superstitious beliefs a…" (rdc_deu6cdd)
Comment
55:06 We release models openly, because they are mostly harmless.
They can't design pandemics. They can't design at all. We don't have automated factories for a hypothetical Skynet to build terminators, and no automated biolabs where a hypothetical Wintermute could unleash a superbug.
Current models are just better CAD assistants in some scenarios.
The current trend is that making a stronger model requires exponentially more resources. That's the opposite of the singularity scenario, where a superintelligence becomes exponentially more intelligent on the same hardware.
It'll take several hardware and software revolutions to get to superintelligence, if it's possible at all.
youtube
AI Moral Status
2025-10-31T12:2…
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzc3FoPlmUo13BjPY14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkjE5TvWv7DeFuViF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5PtLrX3BuN2PtF-54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyfUu7tZNYOzfxMjRF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHLr-umR1_GpE6nKJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymNPOjttGRoP6gWWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz0FpkSc1Ljjwgy7Ux4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwudaEM1sWDSMh8F8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyE7nbis9oK0bLu-Wh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwt0ssXHCnyjjWW5Ql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
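The "look up by comment ID" workflow above can be sketched in a few lines of Python: parse the raw JSON array and index the codings by their `id` field. The JSON literal below is a two-entry stand-in taken from the response shown; the `lookup` helper is hypothetical, not part of the tool.

```python
import json

# Two rows copied from the raw LLM response above; a stand-in for the full array.
raw_response = """
[
  {"id": "ytc_UgwudaEM1sWDSMh8F8p4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugzc3FoPlmUo13BjPY14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
"""

# Index codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (hypothetical helper)."""
    return codings[comment_id]

print(lookup("ytc_UgwudaEM1sWDSMh8F8p4AaABAg")["policy"])  # industry_self
```

The same dictionary could also drive the coding-result table shown above, by rendering the dimension/value pairs of a single looked-up row.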