Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Now if ppl did a AI of yt becoming teachers just to date minors they would ban t…
ytc_Ugz0xTTNk…
Communist! Seriously who decides what is good who decides what is bad? Arrogant …
ytc_UgzNDRyrB…
Self-hosted, local LLMs folks.. therapy without uploading any of your info anywh…
ytc_Ugx13EHdt…
@davesanders95 it will stay that way, LLMs are fundamentally incapable of turnin…
ytr_UgzF4EYAm…
Even with all that, just imagine how many hundreds of billions were invested in …
ytc_UgyTDY9tE…
I'm not a big fan of AI myself, but everything AI uses to train itself is not pr…
ytc_UgwipFatN…
I can't believe Isaac Asimov saw this coming 21 years ago, actually 85-75 years…
ytc_UgwmmZOxU…
@eliezerricardo2293 Cmon, its so obviously AI I can't believe anyone has any dou…
ytr_UgyXHmh6r…
Comment
HOLLYWOOD MOVIE... Here's the thing about assigning risk estimates, you can say 1% or 25% or 100% it does not matter at all because once we pass the point of being able to build AI that is capable of destroying humanity and building in the safety controls you have to add back in the EVIL element. That is to say that there are forces in humanity that would do evil just because they can so you would now need to build in and AI evil defense system where good AI fights evil AI. Now the problem becomes does that fight destroy humanity in order to win? A logic circle, it would be logical at some point to recognize that evil AI can only be destroyed by destroying everything if the only acceptable outcome is that evil AI must be destroyed. Now good AI has achieved the goal of evil AI...
youtube
AI Governance
2025-06-24T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyolKgzen8ewYmRVg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy6alxdRnqQ1YvAk9F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz-2CcJGtGyGMNVgXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwerruDXJiXyR6nTEF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzxjqv5GYYxWjLswZN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGjyK9dW5IcR3nRrt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzknxbakj5ngyG4oOx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyvqZi5XV3wEAE2CU94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy674Yux2-5xrsDW254AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyUyZ5dL--3vEmddFR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
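The raw response above is a JSON array, one record per coded comment, with four coding dimensions plus an `id`. A minimal sketch of how such a response could be parsed and sanity-checked is below. The allowed value sets are inferred only from the values visible in this sample (the real codebook may define more categories), and the `ytc_`/`ytr_` ID prefixes are likewise assumptions from the examples shown:

```python
import json

# Allowed values inferred from the sample records above; the actual
# codebook may contain additional categories (these sets are assumptions).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "liability", "regulate", "industry_self", "ban"},
    "emotion": {"mixed", "outrage", "fear", "approval"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded record.

    Raises ValueError on an unexpected ID prefix or coding value.
    """
    records = json.loads(raw)
    for rec in records:
        # ytc_ = top-level comment, ytr_ = reply (assumed from the samples)
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment ID: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgyolKgzen8ewYmRVg14AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
records = validate_response(raw)
print(len(records))  # 1
```

Validating every batch like this catches the common failure modes of LLM coding runs (malformed JSON, hallucinated IDs, out-of-codebook labels) before the records reach the results table.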