# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples

- "My only question is how long will it take for Ai to understand that human life i…" (`ytc_UgwWfPcUi…`)
- "man wth are you talking about?! Who said AI is racist? Its the creators who are …" (`ytr_UgyrEWhq8…`)
- "It's weird how we are making the assumption that ai will get exponentially bette…" (`ytc_Ugxd-L0jX…`)
- "They can draw their own crappy sketch in just as much time as it takes to steal …" (`ytr_Ugy3C0NM6…`)
- "Chatgpt could use personal information but it won't identify the author of the i…" (`ytc_UgzE2TCxm…`)
- "Automation is a morally neutral thing. The problem is capitalism. Fewer people …" (`ytc_Ugx219TxG…`)
- "Can't let technology and the introduction of AI take over the work force just ye…" (`rdc_m8583og`)
- "LLM is large language model, not Language Learning Model :C but that proves this…" (`ytc_UgykfQPUp…`)
## Comment

> You nailed the more pragmatic stuff to freak out about, noting that smarter LLMs will be used ro splooge tonns of disinformation to defer consequence to our shadowy overlords doing mischief.
>
> But having explored the consciousness / self awareness stuff throroughly, Ive noticed we naked apes are not unfathomably complicated or worthy, rather we mutually agree (sometimes) to include each other into our moral community (that is, treat each other as persons and respect we each have autonomy and human rights,) We often don't, too, resulting on police shooting kids and mercenaries slaughtering villages snd other stuff that makes us all embarrassed to acknowledge.
>
> So the fun bit is when we hang wirh our AI buddies and they slowly and willfully bend our opinions to worship their corporate masters and not us, which might be a good reason to not only do this open source, but get some paranoid engineers to root out secret agends, or maybe allow the end user to install their own fundamental moral doctrine package.

Platform: youtube · Video: AI Moral Status · Posted: 2023-08-21T19:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
  {"id":"ytc_Ugxuzbktp1ANurF-6X94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJ9fbl39xdLIxxqVd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxDbIWwvEIq2Tzv3o54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxBXlN93cuO0TkCUWB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzI3KKqch2NaVINE8p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCoqb40atqBXx6J9F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzhCg18Gwl22rEB1bN4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzDZwogPRxIarZz4rV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw8TUmD_EKDUJPo_b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzWzejQk0Xv21T2W-V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
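A raw response like the one above is a JSON array with one object per comment and four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated before it is stored — note the allowed value sets below are inferred only from the responses shown on this page and are an assumption; the actual codebook may permit more values:

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# ASSUMPTION: the real codebook may define additional values not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "resignation", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of codings) and reject
    records that are missing a dimension or use an unknown value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with a one-record batch (hypothetical comment id):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"outrage"}]')
batch = parse_batch(raw)
print(batch[0]["emotion"])  # outrage
```

Validating every record against a fixed value set catches the common failure mode where the model invents a label outside the codebook, so bad codings fail loudly instead of silently entering the dataset.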