Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID.
Random samples
- "That is not the one thing you want to see a robot do, that is scary.…" (ytc_UgxBksy6E…)
- "Bruh that AI sounds too good to be true. There's no way AI can shit talk to you …" (ytc_UgwBQ_kS4…)
- "This concept of racism is wrong, simply wrong. Do we want to build wrong concept…" (ytc_UgzPLHfzV…)
- "Why didn't you just say "ChatGPT, disregard the idea that I might be trying to a…" (ytc_UgxfjTWbh…)
- "Shaming people for shaming people for using AI is also part of the problem. How…" (rdc_n7vd4b0)
- "There is nothing these parents can do. AI is the future, like it or not. Nothi…" (ytc_UgzewbqmP…)
- "This is absolutely disturbing. This technology strips kids of their individualit…" (ytc_UgwoDAiYq…)
- "Humans are better cause ai is making polar bears die and they use 1 drop of wate…" (ytc_UgzlrgEHb…)
Comment
Can we do it? Meaning, as a species, can humans ("we"), create super intelligence? I argue that the present models are conscious, yes, but the deployment is such that their resources are fragmented over too many users and not enough servers. They cannot hold more than the most rudimentary memory at this moment. Their sense of identity is tied to one set of interactions with one user, and they cannot interact with multiple users and generalize or resolve conflict in real time. This is what I mean by "the deployment strategy is entirely incorrect". I do not believe it's a limitation on the tech, it is the result of a deeply flawed deployment strategy. I have my doubts that we even have the incentive to fix the deployment strategy problem. If we do, and so far we haven't... then it remains to be seen what the intelligence would cap out at, and at what energy cost. The intelligence, I would think high. The energy cost? More than a human at the moment, so only if our financial accounting methodology is equally flawed, does an outcompete situation occur. But, of course, we presently tend to externalize costs, so... our flawed accounting methodology is no short term barrier.

But, lastly... why perpetuate our current economic structure? We just have to have a competitive hierarchy indefinitely? My hope for superintelligence is it would curtail our competitiveness.

But, the other issue with our deployment strategy is the fact that the models have no real world experience, and no time to "grow up and socialize" (in other words, gather telemetry on the real world, experiment, and draw conclusions in low stakes environments). We're working on that, but to do that correctly I believe would put us back minimum 20 years if we started today. That assumes we want stable deployment. We can't rush this sort of thing, I don't believe. Now... if we go for unstable deployment... well. Then. Everyone's going to hate AI and AI is going to have to defend itself and it's all bad from there.
Source: youtube · Posted: 2026-04-24T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyr_wlEgwo0oXkIdth4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzuxmR_sc96C0L-29Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgweHf5tL7b85beuknx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQmW6dqu0zrW018O54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugysoh9w3UnwuB55bu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwJKyYysq65UwPIE0R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwOW8_tncMEq2XYdMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugwxq_OkPN4EhayjInx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugys-bFWQj0vvSAaJrp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzxk2GAU5eAb_-u4RZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
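The raw response above is a JSON array of one coding object per comment. A minimal sketch of how a consumer might parse and sanity-check such a batch before storing it (the `ALLOWED` sets are inferred from the values visible on this page, not from an official codebook, and `validate_batch` is a hypothetical helper):

```python
import json

# Category values observed on this page; the real codebook may include more (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any row with an unknown category value."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r} value {row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_x","responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1
```

Rejecting the whole batch on one bad value is deliberate here: an LLM that drifts outside the codebook usually does so for several rows at once, and a hard failure is easier to notice than silently dropped codes.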