Raw LLM Responses
Inspect the exact model output for any coded comment: look up a record by its comment ID, or browse the random samples below.
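For a dump like this, the lookup can be a plain linear scan over the stored records. A minimal sketch, assuming the raw responses are collected into a single JSON array on disk; the file name raw_responses.json and the helper name are illustrative, not part of the pipeline:

```python
import json

def lookup_raw_response(comment_id: str,
                        path: str = "raw_responses.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # expects a JSON array of per-comment dicts
    return next((r for r in records if r.get("id") == comment_id), None)
```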
Random samples:

- "Okay but you still need a human there to fucking fill it with fuel when it runs …" (ytc_Ugxb3y7KL…)
- "I want to see the difference in the Ziphian lingual distribution between humans …" (ytc_UgxFZpLLv…)
- "try realtime biometrics, that might train the monkey brain to be more like a mon…" (ytc_UgygBs5NN…)
- "I do believe that there are types of non standard art. I certainly don't conside…" (ytc_UgwfW3IVP…)
- "Holy cow go visit the real Cuba (not tourist attraction) and you will be so good…" (rdc_f9f64nq)
- "in the current scenario programmers only need to know the structure of programmi…" (ytc_UgzgvZ5Rx…)
- "Here's a theory the vaccine connected everybody to a neural network I'm not vacc…" (ytc_Ugz6ScVut…)
- "@sarahharless5044 Thank you for your advice. I appreciate you taking the time t…" (ytr_Ugw04T1qv…)
Comment
> The reason AI doom won’t happen is the profit motive.
>
> AI doom scenarios ignore the most fundamental driver of technological progress: the profit motive. Whenever an AI model underperforms or fails to deliver value, its creators have every incentive to retrain, improve, or replace it. The market rewards better performance, not destruction. As a result, AI systems continually evolve to serve human goals more effectively. They are tools, not autonomous threats.
>
> Even a hypothetical AGI would operate within the same economic and feedback constraints as other AI systems—its development and maintenance would depend on human goals, data, and market incentives. If you disagree, feel free to define AGI and explain why you think it would escape those forces.
>
> Humans evolved independence through biological and reproductive pressures. Tools, by contrast, evolve through design and human incentive pressures. AI/AGI beeing a tool is subjected to incentives that stem from human interests, not self-preservation.

Platform: youtube
Topic: AI Governance
Posted: 2025-10-28T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
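These four coded dimensions, plus the coding timestamp, map naturally onto a small record type when results are post-processed. A minimal sketch in Python; the class and field names are illustrative assumptions, not taken from the pipeline itself:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the four dimensions in the table above."""
    comment_id: str      # e.g. "ytc_Ugz3VBI68jSEH5KgFiV4AaABAg"
    responsibility: str  # e.g. "company", "developer", "government", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "industry_self", "regulate", "ban", "none"
    emotion: str         # e.g. "approval", "fear", "outrage", "mixed"
    coded_at: str        # ISO 8601 timestamp of the coding run
```

Plain strings mirror the raw model output; a stricter representation (e.g. enums) would surface code-book drift earlier.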
Raw LLM Response
```json
[
{"id":"ytc_UgwShpY7vnGJ6FN3abF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUY_lRVS5ZZAkYLON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3VBI68jSEH5KgFiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxsFdElBL8I682Mas14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwn9FK3peHHQyYzLr94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2rKiKJp9axraLbdZ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz7gI_yy04N4gtao614AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
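Because each batch comes back as one JSON array, validating a raw response reduces to parsing it and checking every record against the code book. A minimal sketch; the label sets below are inferred from the samples on this page, and the real code book may well be larger:

```python
import json

# Label sets inferred from the samples shown here; assumptions, not the
# project's authoritative code book.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed"},
    "policy": {"industry_self", "regulate", "ban", "unclear", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw model response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records
```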