Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What if we realize the only way to safely use AI is to create a world for it ind…" (ytc_UgyI8aRoy…)
- "Well. People didn't like digital artists at first, so idk how ai artists will be…" (ytc_Ugw4SaToK…)
- "lmao asking a ceo of an AI company how useful AI can be for businesses....journa…" (ytc_UgztAHG4U…)
- "I think we should investigate the children of all the people building AI because…" (ytc_UgzNDu0UQ…)
- "I can tell that the AI bots voice sounds a lot like Elon Musk.....I know that's …" (ytc_Ugzes54Nd…)
- "It's not even that the people's art can be taken without consent by AI corporati…" (ytc_Ugyc98983…)
- "A sufficiently smart AI won't reveal its hand until we can no longer "pull the p…" (ytr_Ugw0NE-Qa…)
- "I prefer Chatgpt to other AIs. Far more intelligent. Perhaps even too intelligen…" (ytc_UgwPrRFDj…)
Comment
@tilt0matic2 nope. We have the tendency to wipe threats out in pretty much anyway we can. AI being made in our image will do the same. The only possible way we could avoid it, would to actually be peaceful with each other and show that to AI. But I doubt we'll get that way before AI has evolved into us.
youtube · AI Governance · 2023-08-17T07:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugwa_pEu-NGXMtxlHnV4AaABAg.9v5BalJMwKv9ww9P64yNnh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugx5rp2nNO24Wyhx7Ax4AaABAg.9v1xxXNYY7D9y0yIbtTW9C","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzUKGoRDmhtLT1mobN4AaABAg.9urNVOYwvOP9urbnbkmUu1","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxSbuKA4gtHjZ8f25x4AaABAg.9upGYb5_YkE9wRXvjqeEe2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugz6anPoXkBcAS2AAkZ4AaABAg.9uIicPT3p_M9uKuGS--yLQ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx89fmJqE0WW-pQROF4AaABAg.9tVokbG-Yo69tVu-u44pw2","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugx89fmJqE0WW-pQROF4AaABAg.9tVokbG-Yo69tVuh2JHdZ5","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugxnh_0ObDi9fP3KkVx4AaABAg.9sqYTNFSDam9vcZfc0WONZ","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugy1GC0FuB6IksRJiAx4AaABAg.9smmNqIQPG89smqX-LAUjk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugyof9gBmUYi_Sin8jh4AaABAg.9shyiTPHsh39shzOyjaSJt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
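The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed for lookup by comment ID — the function name `index_codes` and the inline sample IDs are hypothetical, not part of the tool:

```python
import json

# Hypothetical sample mimicking the raw batch response shown above;
# the IDs here are placeholders, not real comment IDs.
raw_response = """
[
  {"id": "ytr_sample1", "responsibility": "distributed", "reasoning": "virtue",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytr_sample2", "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model output and return {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Skip malformed records missing the ID or any coded dimension.
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            indexed[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return indexed

codes = index_codes(raw_response)
print(codes["ytr_sample1"]["emotion"])  # -> resignation
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one parse of the batch, then constant-time lookups per comment.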