Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
As a professor in astronomy once said: " We have been looking for alien contact …
ytc_UgwReiqbS…
You can have the ai but if you come after kink don't forget we know our way arou…
ytc_UgxWcmKwp…
@dominicfernandes3953 so the doctors/nurses also work for free? The ones who cr…
ytr_UgzpiYq7E…
Trucker here. They've been boasting for a long time that our jobs are going to…
ytc_UgylJK46a…
Thanks for making more training data. **inserts photos into a.i and it spits out…
ytc_Ugw-yy7aG…
I made my own experiment augmenting sentience on an LLM and it was a damn roller…
ytc_Ugz1cVNzD…
My issue with attacking AI is that I consider it in the same category as collage…
ytc_UgzqLOHBi…
Wasn't he the same senator that introduced the bill to bad states from making la…
rdc_oi2f5xm
Comment
There is a chance that Sam Altman of OpenAI as well as whoever is behind Anthropic realize this as well, and have good ulterior intentions and are trying to be the leaders in AI research to realize it. Unfortunately especially with OpenAI this is likely not the case, but I hold out hope that should they win the AI race it would at least result in a better world than someone like Google/Meta or a Chinese firm winning.
Ultimately, for Google/Meta/Microsoft we can make a very confident guess that their AI efforts are just for profit, for OpenAI it's less certain - but even 90% profit and 10% for the good of humanity would result in a huge improvement in practice due to how marginal economic curves work (to extract that last 10% of extra profit you either miss out on disproportionate good or do disproportionate harm, as you have to do net negative things like price discrimination and anti-competitive practices to milk every drop of money out of your customers).
youtube
AI Governance
2024-03-17T00:0…
♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugxhm-p7rDbeJGonM3h4AaABAg.A13G5Lpm0ADA13gfmWd8My","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxhm-p7rDbeJGonM3h4AaABAg.A13G5Lpm0ADA14vzCfaopw","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugxhm-p7rDbeJGonM3h4AaABAg.A13G5Lpm0ADA14z_wRoAj9","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugxi5ce8EMCD9FE3_yR4AaABAg.A13Ez6-PLyqA13WOLhlLWe","responsibility":"none","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugxi5ce8EMCD9FE3_yR4AaABAg.A13Ez6-PLyqA17KyQnbW2P","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugwf45m2Q90YUSuxErJ4AaABAg.A13EcDzGajyA16vt7XpPEt","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyHe-33IprUCbxgqwN4AaABAg.A13E1yxI70dA1Gd_bBpr4m","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzoEE0Nmcw2-gyYSzV4AaABAg.A136JkmRALMA16Q1pb8MRZ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw1xsTprIl6aeaT5ft4AaABAg.A135mYaeu95A13ZLuH9Mob","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw1xsTprIl6aeaT5ft4AaABAg.A135mYaeu95A13bAY3-G1c","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
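
A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator (not part of this tool), assuming the allowed values per dimension are exactly those visible in the coding results on this page; the full codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# dashboard (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only rows whose values
    fall inside the allowed codebook categories."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

Rows with an unknown value in any dimension are dropped rather than repaired, so a malformed model output never silently enters the coded dataset.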