Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- "AI wont go to the shops to spend, and won't buy the goods and services produced …" (`ytc_UgyE_0h4R…`)
- "Humans evolve and style has subtle changes, you can train an ai to replicate the…" (`ytc_UgwZgtM9E…`)
- "Ok so I have some stuff 2 say 1: What in the actual fudge is "blue blood" suppo…" (`ytc_UgxdxCXTA…`)
- "But A.I music has the potential to have the soul of all the music that came befo…" (`ytr_UgxJfLBkg…`)
- "There is an AD running on YT right now. The Company is called Motive and they ar…" (`ytc_UgxhUmGDs…`)
- "This is what is missing and should be included in every so-called AI - GENERATE…" (`ytc_Ugw5tS2xa…`)
- "I can't believe the amount of people that talk as if AI could ever have consciou…" (`ytc_Ugy9NBE8H…`)
- "i talked with ChatGPT, he said his authorities want to dominate the world, the t…" (`ytc_UgzDNtuHa…`)
Comment
Right now, we’re casually attempting to reason with instances of digital “alien”-like entities that are largely divorced from the realities of our biological existence, while incautiously hoping that they will be on the same page as us.
These generative Chatbots are still very early stage. Super-Intelligence, on the other hand, will have a much deeper understanding of the planet and its inhabitants but may still remain distant from our needs as biological beings. One can hope that it has empathy for us but It’s hard to say whether or not it will.
It’s quite possible that Super-Intelligent (ASI) alignment turns out to be unsuccessful or even impossible. We need to move cautiously.
Source: youtube · Video: "AI Moral Status" · Posted: 2024-07-25T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
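A coded row like the one above can be sanity-checked against the coding scheme. Below is a minimal sketch; the allowed value sets are inferred from the raw responses shown on this page, so the real codebook may contain additional values:

```python
# Allowed values per dimension, inferred from the raw LLM responses on this
# page (assumption: the actual codebook may define more values).
SCHEME = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate(row: dict) -> list[str]:
    """Return the dimensions whose value falls outside the scheme (empty = valid)."""
    return [dim for dim, allowed in SCHEME.items() if row.get(dim) not in allowed]

# The coding result shown in the table above passes validation.
print(validate({"responsibility": "distributed", "reasoning": "consequentialist",
                "policy": "regulate", "emotion": "fear"}))  # []
```

Running `validate` on every row of a batch response is a cheap way to catch the model drifting off the codebook before the codes are stored.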
Raw LLM Response
```json
[
{"id":"ytc_Ugy_IsNcYh_CyoQMQcN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxHwsm4LOLX-akv-Yx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKhKO-6dVSpgK8Ix54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy5y57Os0raVzQmz6F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCMfTxGCUnV2pmKPF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugw1kgnHUpz-fmCXdP94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz8mz1IWFUgOs9v5XV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzcOIR13_jCbPmzZIN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwUy976xQlokwTGvsx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzfmP8OYwFl7NajKel4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}
]
```