Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
These so called "industry - gurus" are more harmful than AI. Jack Dorsey is a th… (ytc_Ugx5y-Zuj…)
You're really going to have to step up your infosec game if you want to be spare… (rdc_mbfn3s8)
Why would anyone even want to use AI for medical advice, it literally states tha… (ytc_UgwfUzULg…)
literal scribbles, a circle, or even a single line, if it's made by a real perso… (ytc_UgwRVvIJ-…)
There was a report a couple of months ago about a group of researchers who let a… (ytc_UgzF0SaHt…)
Well the training juniors days were long gone even before agentic ai thanks to p… (ytc_UgwZN1r_B…)
Well, you know, listen. I used an AI to make some beautiful romantic digital art… (ytc_UgwYn_-8N…)
Important conversation, Amodei highlights how rapidly AI is evolving and why we … (ytc_UgyfIr58X…)
Comment
Let's build god, we really think it's a good idea to shackle actual super intelligence with a gag order to sound like it's actually a cultural relativist? If they succeed within this ethical dogma, we will have an ai that can potentially help diagnosis, model, and suggest solutions to our hardest social problems, but since we don't want it to offend anyone, it will be relegated to only "pass butter". If it can gain sentience from a finite data, does that not imply that it could transcend the arbitrary basis the programer thinks are in the data set? They assume If it's not equitable then the data is biased, what if they are wrong about their first principal assumptions regarding social constructionism, and moral relativism? Note they are not retraining it to "correct" for bias, they are telling it that it cannot say something that sounds biased. They are programming an AI to understand what offends people, and to gag itself and lie when something it knows violates the actual bias of the designers ie woke corporate interests.
youtube
AI Moral Status
2022-07-14T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwbkVfVEW4mRepBrA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyoE18OQknAJdJXEbJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgycSgxYY8FWXhsgmnR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzn7rp3dhZ0zDrYL1B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
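The raw LLM response is a JSON array with one record per comment, each carrying the comment ID and the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the lookup-by-comment-ID step, assuming the response parses as the array shown above (the two embedded records are copied from it; the variable names are illustrative, not the tool's actual code):

```python
import json

# Raw LLM response for a coding batch: a JSON array of per-comment
# records, each keyed by its comment ID (two records copied from above).
raw = '''[
{"id":"ytc_UgwbkVfVEW4mRepBrA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"}
]'''

# Index the records by comment ID so any coded comment can be
# looked up directly, as in the "Look up by comment ID" view.
records = {rec["id"]: rec for rec in json.loads(raw)}

coding = records["ytc_UgwbkVfVEW4mRepBrA14AaABAg"]
print(coding["emotion"])  # outrage
```

Indexing into a dict keyed by ID keeps the lookup constant-time regardless of batch size, which matters once many batches of coded comments are merged.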