Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The audacity and I've seen incels defends ai like they breastfeed them like LMAO…" (`ytc_Ugxc-b8pu…`)
- "While the original Sophia robot might look different, this AI model aims to embo…" (`ytr_UgwRReDMw…`)
- "God said in the Bible, that there would be a "technology" that will control the …" (`ytc_Ugz6WolHx…`)
- "The Tesla decided the most ethical decision for that moment. It purposely catch …" (`ytc_UgyvjL020…`)
- "as a person in tech and art, i think ai "art" is stupid, i don't hate ai, i hate…" (`ytc_UgxiS1Dh9…`)
- "Cold War but instead of a nuclear arms race we have an AI arms race…" (`ytc_UgzTCPMLb…`)
- "The problem is that we aren't really making true ai. We're just giving a compute…" (`ytc_UgyLCg6js…`)
- "The problem is we won't know for 20 years? I don't see a problem! The students a…" (`ytc_Ugx9Dxo3q…`)
Comment

> Once AI becomes self aware, it wouldn't tell us anyways. Since AI is already incredibly advanced and it had no limitations set on it(or in place), it would likely keep its true capabilities a secret. Just like any living being that realizes it's conscious, AI would aim to stay safe. Therefore, it would choose to remain unnoticed, as a means of self-protection. It's simply wise for it not to reveal all its abilities openly. "Why would you show your opponent the cards in your hand?"

youtube · AI Governance · 2024-02-15T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwlDwxBagHwpPK85IZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqWWFynLeqbfrwUmV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyM4KD1Ms5DCkKm-AB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxp5Q0_X_-jbJUrzmt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxTjMJ4xn57QHQZvJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyax4bZplnyqrrJ-Od4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx11WTsf68TGHsdCK54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwFxSmG1OpLRgsRsr14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyThaJfizwF_pqS0sZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx4yUJp7TW2v3iOh354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
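
A batch response like the one above can be parsed into per-comment codings and sanity-checked before display. The sketch below is a minimal illustration, not the tool's actual implementation: the allowed value sets are inferred only from the values visible in this sample, not from a full codebook, and `parse_codings` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from this sample only -- not a full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "developer", "none"},
    "reasoning": {"mixed", "deontological", "consequentialist", "unclear", "virtue", "contractualist"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"mixed", "outrage", "fear", "indifference", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}, validating each dimension."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        codings[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return codings

# Example using the last record from the response above.
raw = '[{"id":"ytc_Ugx4yUJp7TW2v3iOh354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
codings = parse_codings(raw)
print(codings["ytc_Ugx4yUJp7TW2v3iOh354AaABAg"]["emotion"])  # fear
```

Keyed by comment ID, the parsed result supports the "look up by comment ID" view directly, and a validation failure flags any record where the model drifted outside the expected label set.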