Raw LLM Responses
Inspect the exact model output for any coded comment.
A comment can be looked up by its comment ID, or picked from the random samples below:
- @chrysalis1670 tool are already released that tranied of massive pile of data o… (`ytr_UgySRfSi9…`)
- It became obvious when the AI started saying, "DO NOT REDEEM!" whenever prompted… (`ytc_UgyBUTmIl…`)
- The initial energy requirements of AI is substantial but once the models are tra… (`ytc_UgzGZ-Kfn…`)
- Is it possible that rather than supremacy, AI will just conclude that existence … (`ytc_UgwlJvJOJ…`)
- When machines and AI automate and obsolete blue collar workers without their con… (`ytc_UgzB55Z6g…`)
- Yes! it really suprises me how often people just take LLM outputs at face value … (`ytr_Ugx-XKN2O…`)
- @KwikBR why tho? Like if you used AI that was trained on images that were allowe… (`ytr_UgynQGBmT…`)
- However, there are several problems here: 1) Not all jobs can be done by AI! (T… (`ytc_UgxtYv1H6…`)
Comment
Any company that puts winning the AI race ahead of AI safety is a cause of grave concerrn. The AI race is no different from the nuclear arms race except that it is far more dangerous. As Dr. Yampolsky explains: The chances that we can survive such a race is very small. So in effect NO ONE wins the AI race -everyone loses. Would you really want anyone dominating the world (and make no mistake about it AI will definitely dominate the world)- that puts AI safety second? I would suggest that if anyone knows about industry leaders or just powerful people in general who put AI safety second to disclose that information on the internet. I would also sugest that any potential AI investors not invest in AI companies that do not put AI safety first. Speaking for myself, if I can ever get access to my inheritence funds, I will only invest in companies that put AI safety first and that are manufacturing a product that will benefit humanity.
Reply · youtube · AI Governance · 2025-09-04T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
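
Each coding result is a fixed set of categorical dimensions plus a timestamp, so it maps naturally onto a small record type. The sketch below is only an illustration, not the project's actual schema; the class name `CodedComment` and the field spellings are assumptions chosen to mirror the table above.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodedComment:
    """Hypothetical record mirroring the Coding Result table; names are assumed."""
    comment_id: str       # e.g. "ytc_…" for comments, "ytr_…" for replies (assumed convention)
    responsibility: str   # e.g. "company", "developer", "government", "ai_itself", "none"
    reasoning: str        # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str           # e.g. "regulate", "ban", "liability", "none"
    emotion: str          # e.g. "fear", "outrage", "approval", "resignation"
    coded_at: datetime    # when the coding was recorded
```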
Raw LLM Response
```json
[
  {"id":"ytc_UgxSy-LCtmnPPeEGdqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzVq_7UdaNLcOpQcJ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwEsOy6UlvRmPP5PzJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgyP3qlSPzUZnxrQ_4J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVcTaAxXnTfBaQjq14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQQ1tpARgAfKcqZxJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwhmv3XyQs-p9YezCF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSZoYfwGb3mqCYsBN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzjep0yJrluev5zJuJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzBVPA71YMJH1303KN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
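
The raw response for a batch is a single JSON array with one object per comment, so inspecting the exact model output for one comment amounts to parsing that array and filtering by `id`. The snippet below is a minimal sketch under that assumption; `raw_response` and `lookup_coding` are placeholder names for illustration, not identifiers from the actual codebase.

```python
import json
from typing import Optional


def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding object for one comment ID from a raw batch response,
    or None if the output is not valid JSON or the ID is absent."""
    try:
        entries = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    if not isinstance(entries, list):
        return None
    return next(
        (e for e in entries if isinstance(e, dict) and e.get("id") == comment_id),
        None,
    )


# Example: the "company / consequentialist / regulate / fear" entry shown above.
# lookup_coding(raw_response, "ytc_UgxSZoYfwGb3mqCYsBN4AaABAg")
```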