Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- isn't language just a small part of our reasoning and understanding using our br… (ytc_UgwqEegfO…)
- Also add to this the fact that AI requires massive amount of energy to run. I do… (ytr_UgwmCoohK…)
- 8:50 i sometime argue with people that try to turn me into an ai user zombie… (ytc_UgyU2xnEl…)
- Unfortunately they didn't dig deeper into the statement "what does understand me… (ytc_UgxysP1mM…)
- The different is with digital and traditional art you are creating your own work… (ytc_UgyGDu0xS…)
- I've talked to a few artists today, and they all told me they're annoyed and str… (ytc_Ugyce5ou2…)
- AI will be Used to LIMIT Human intellectual ability and be controlled by AI robo… (ytc_UgzisZvQz…)
- Why don't they do something about the actually dangerous part? Anyway, my point… (rdc_o0kcq59)
Comment
The one issue these AI people always fail at understanding, whether pro or con, is that there's limits to intelligence. Even super intelligence alone can't solve many problems. Nor does super intelligence operate in some kind of a vacuum. we have super-intelligent people, they didn't manage to entirely solve all our problems or take over the world. They may have changed the world, but that took time. However, if people have access to AI, they should make more intelligent decisions, it's hard to see that being a bad thing overall. I don't believe even a AGI, super or otherwise will be the problem, the problem will be unethical people using AGIs to do things they shouldn't be doing. Besides, control is an illusion, at best we can try to do is manage.
youtube
AI Governance
2025-09-06T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxnYcp3LPq0j4tdj4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyCK2jJMzpyBD0V26x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzybWNM7qDfr73p92V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy6x0mdJwhuF0eMN5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQLcKdD6IVkuP8ydZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx3FLGRJtcdPHnJXil4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzAQbJXT-uOOsq0Crx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx05cbZEEb44P85lZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOrDxSX47YPdW-nb94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwHxEJVy1trzz6wyGl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
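A raw response like the one above can be parsed and sanity-checked before its values are written into the coding table. The sketch below is a minimal example, assuming the dimension vocabularies are exactly the values visible in this batch (e.g. responsibility in {developer, ai_itself, distributed, none, unclear}); the project's actual codebook may define more categories, and the `ALLOWED` mapping here is an illustration, not the tool's real schema.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = '''[
 {"id":"ytc_UgxnYcp3LPq0j4tdj4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwHxEJVy1trzz6wyGl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# Vocabularies inferred from the values visible in this batch -- an
# assumption, not the project's real codebook.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "mixed"},
}

def validate(records):
    """Return (comment_id, dimension, value) triples that fall outside the
    allowed vocabulary; an empty list means the batch parsed cleanly."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # an empty list: every coded value is in the vocabulary
```

Validating against a fixed vocabulary at ingest time is what makes values like `unclear` safe to store: any hallucinated or off-schema label surfaces as an error triple instead of silently entering the coded dataset.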