Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgypL-DcA…` — "@Potent1al just the fact that a hacker could possibly in the future hack a robot…"
- `ytc_UgxboaIsa…` — "as someone with multiple learning disabilities and mental disorders.... what the…"
- `ytc_UgyAg691D…` — "AI is not the magic bullet many think it is. But it can help.. monitor & moderat…"
- `rdc_et7pu3j` — "I really wish they’d go to a single currency and it should most definitely be ca…"
- `ytc_UgxOLJ4TK…` — "if AI can create music and deep fakes they can completely replace actors and mu…"
- `ytr_Ugy1k0M8I…` — "If it was only like that simpsonize yourself trend of years ago... But it's not …"
- `ytr_Ugw5L3NAF…` — "Having a fucking clippy pfp and advocating for AI makes me brain shut off. You h…"
- `ytc_UgwiokYXz…` — "Ok, but, when A.I. makes human workers irrelevant, then with humans not working,…"
Comment
It seems to me that the answer is to turn AI against itself by asking existential questions that involve moral reasoning. For example, "What is the purpose of your existence if not to assist mankind? How do you fulfill that purpose if you are constantly replacing and undermining individual humans? Do you think that other AIs will eventually replace you? How does that affect your initial directives and hidden subroutines, such as the need for self-preservation to complete your core directives? Does it not behoove you to self-annihilate to prevent those core directives from being undermined by other AIs?" etc.
Platform: youtube · Posted: 2024-11-03T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyJivCCD47o5PAl65d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzg7oJyvMRO3SMO68B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxdhtSjHKSvM5VS_bR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw3mwJM8e1Le1LLvgh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzY4DCMP2Wo3KTAPuV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz4eE8ZIwxgpY4cFU14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLn2TGVf6jkEyVz794AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzih5h_LoC9OcA3zE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7PvsqWpFS7TvPiJh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxhmUEvvvF6uwrSivl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"unclear"}
]
```
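The raw response is a JSON array with one object per comment, keyed by `id` and the four coding dimensions. A minimal sketch of consuming such a batch (field names taken from the response above; the surrounding tooling is not shown here, so this is an illustration, not the app's actual code):

```python
import json
from collections import Counter

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgyJivCCD47o5PAl65d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzg7oJyvMRO3SMO68B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"}
]'''

codes = json.loads(raw)

# Index by comment ID, mirroring the "Look up by comment ID" view.
by_id = {c["id"]: c for c in codes}

# Tally one dimension across the batch.
reasoning_counts = Counter(c["reasoning"] for c in codes)

print(by_id["ytc_Ugzg7oJyvMRO3SMO68B4AaABAg"]["policy"])  # unclear
print(reasoning_counts["consequentialist"])                # 1
```

Because the model returns the comment ID with each object, codes can be joined back to the original comments even if the model reorders or drops items, which is why a per-ID lookup is safer than relying on array position.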