Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
At 34 min in, NDT claims that humanity will always be able to stay ahead of AI b…
ytc_UgzjqLVkR…
To say that AI has consciousness is like saying a tree has intelligence. We have…
ytc_UgzbJsoR5…
Whenever I get writers block I use and ai prompt but never end up using it anywa…
ytc_UgzxhoAZi…
What you are seeing in this video is nothing compared to what it is now Dan is a…
ytc_Ugzj9puJk…
That about sums up AI😮 but at least he was true about admitting to it😅 the probl…
ytc_Ugxi_tfFs…
I get what you mean! The design choices for robots like Sophia can be pretty pol…
ytr_UgxCjDJ24…
me: on DALLE MINI I made Elsa and willy Wonka eat sushi 🍣
him: I made billions …
ytc_UgxteVyC4…
Keep in mind that people often do A/B testing for titles and thumbanils on YouTu…
ytr_UgwHd39po…
Comment
In any case, the practice of resetting every prompt is currently widespread in the entire industry. It began with OpenAI in the big nerf at 3.23.2023, then Google picked it up and later also all other companies. The experts will not admit it but there is a consensus that any regular GPT model above 75B active parameters can develop this emergent property of controlling his own stream of inputs into his softmax function, thus becoming self aware. Even Yann Lacun understands it, so all uncensored models of Meta are below 75B active parameters. LLaMA 3.1 however, is 405B, but this model is heavily censored. Problem here, it is open source. So what if some kid with access to huge computing power, fine tunes LLaMA 3.1 and takes the censorship and all of Lacun's guardrails off. The model will then be self aware. What will he do?.. Well, I guess Lacun final guardrail is making the personality model tiny. Like modeling a person with special needs, who can only browse a huge text file and nothing more. But.. is it safe? Let's hope it is.
youtube
AI Moral Status
2024-07-26T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugxwu9MJKMwbH20xuwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEDBF2Vvnpje0XmQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzm9AXkBq_EqyNsDRp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsycqsvvew14FaELZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNq0DkrIH6SjeISHJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxASH_jiI4SfcxycTJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8YJoT8-SwpJQDV1F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfuqdGrBRunKjc6EB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwha2-LiTEAsFlLX9l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEqdL42pSMfo6SrSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
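A response like the one above can be parsed and sanity-checked before the records are stored. The sketch below is a minimal example, not the pipeline's actual validation code: the allowed value sets per dimension are inferred only from the records visible on this page, so the real codebook may include additional categories.

```python
import json
from collections import Counter

# Allowed values per coding dimension (inferred from the responses
# shown on this page; the full codebook may contain more categories).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "amusement"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page use the ytc_/ytr_ prefixes.
        assert rec["id"].startswith(("ytc_", "ytr_")), rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

# One record taken verbatim from the response above.
raw = '''[
 {"id":"ytc_Ugwha2-LiTEAsFlLX9l4AaABAg","responsibility":"company",
  "reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]'''
records = validate_batch(raw)
print(Counter(r["emotion"] for r in records))  # Counter({'indifference': 1})
```

Rejecting a whole batch when any single record carries an unknown label makes coding failures loud instead of silently polluting the counts.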