Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Tyson's brief aside about "guardrails % checkpoints" is doing an awful lot of he…" (ytc_UgwgYTpfr…)
- "here's the thing one way or another we will wipe ourselves out be it pollution, …" (ytc_UgxuFueYL…)
- "Pretty fucking bleak that you can still detect that ChatGPT way of speaking even…" (rdc_nnjmb3u)
- "I kinda figured that robot slam wasn’t survivable! I am curious how the veggies …" (ytc_UgyrKiadW…)
- "The Elon Musks and Sam Altmans of this world see themselves as gods, not bound b…" (ytc_UgxIxHtFc…)
- "As an AI Enthusiast, I appreciate Charlie bringing up how the tech should be ok …" (ytc_Ugz_9KmSr…)
- "Well... I used to be interviewing software developers to hire them for software …" (ytc_UgyqYHByA…)
- "It's so sad that true artists of all disciplines will be left in the wake of spe…" (ytc_UgxMnVaTA…)
Comment
2:10:50 ish this is not gonna get past any ethics review board, but one way to get safe AGI would be to localise it in a biological substrate. It would solve the contain issue. Super intellect brain in a jar with magnitudes higher IQ who could help us solve fundamental mathematical and physics problems. Though I suppose this is in effect a biological iteration of narrow AI if for no other reason than the biological limitations of such a super intellect in how it can manifest its AGI. Still, it would be a lot safer if incredibly morally dubious, but we could use such a super intellect as a launchpad for a safe AGI on account of if we cannot conceive of a way to make AGI safe, perhaps a lab-grown brain with some xxxx IQ intellect could conceive of a safe way to develop AGI.
Source: youtube | Posted: 2024-07-06T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzhACq6DEYufyNfLKt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugwkrip8JRd6eOssIt14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZq9eumJCEcYP6Zet4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxxQSFa7BM9aCz7Ach4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwngLQHf2pAOx-l8l54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwyGaCd7URee5pIY8N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxpjHcWlIN2wvHgYaF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxccOYzbNx7_03SWjh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy5fICEj6a2Eg9J-3d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy3m41-I2NaoaECF_V4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
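The record with id `ytc_Ugy5fICEj6a2Eg9J-3d4AaABAg` carries the same values shown in the Coding Result table above (developer, consequentialist, regulate, mixed), which is how a coded comment is traced back to the exact model output. Below is a minimal sketch of that lookup step, assuming the raw response is the valid JSON array shown; the `RAW_RESPONSE` variable and `lookup_coding` function are illustrative names, not part of the site's actual tooling.

```python
import json
from typing import Optional

# Illustrative stand-in for one raw batch response; in practice this would be
# the full JSON array shown above, loaded from wherever responses are stored.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugy5fICEj6a2Eg9J-3d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
"""

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding record for one comment ID, or None if it is absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

if __name__ == "__main__":
    record = lookup_coding(RAW_RESPONSE, "ytc_Ugy5fICEj6a2Eg9J-3d4AaABAg")
    if record is not None:
        # Mirrors the "Coding Result" table: one value per coded dimension.
        for dimension in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dimension}: {record[dimension]}")
```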