Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples
- "Idea . Create Wikipedia page about the day ai got shut down make it a date in fu…" (ytc_Ugz_cOPs8…)
- "I have been heavily using ChatGPT for over two years now. I do not notice many…" (rdc_ms06kyn)
- "he looks like some british guy about to stab you and then he jut talk about ai a…" (ytc_UgxliP-e1…)
- "SERIOUSLY!?? And this was 7 years ago. What about a descent robot suit for a peo…" (ytc_Ugw8qdtrR…)
- "AI does NOT mske art. Those who are perfectly healthy and empley it are not crea…" (ytc_UgxvAr5wc…)
- "Oknok, am I nuts or is the female narration voice at 11:55 clearly an Ai voice? …" (ytc_UgxQBnhUh…)
- "AI copy the art in 1 to 1 to “learn” while human could only make a similar artst…" (ytr_UgzG9o-P3…)
- "There’s a quiet calm that comes from hearing someone like Geoffrey Hinton expres…" (ytc_UgzOrWatA…)
Comment
> After listening to many of the podcasts on AI safety / existential risk I think the problem with a fundamental assumption baked into many of the "anti-doom" arguments is illustrated really well in Robert Miles Video "There is no rule that says we'll make it".
> To summarize it badly: If an extinction level asteroid was about to strike earth in 50 years, we could probably deal with that today. But 100 years ago? Yea though luck. 500 years ago? We might not even be able to see that thing coming.
> The point is, there is no rule that says we will be ready to deal with a challenge and so far as a species we have been lucky, otherwise we couldnt talk about it now.
> We might lack fundamental theory to be able to detect that a system is dangerous. Or we might see it coming but are unable to deal with it.
Source: youtube · 2024-06-14T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
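Each coding result is one record with a single label per dimension plus a coding timestamp. Below is a minimal sketch of how such a record could be represented and checked in Python; the label sets are only those visible on this page, so the coder's full vocabulary may be larger, and the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass

# Label sets observed on this page; the coder's full vocabulary may differ (assumption).
RESPONSIBILITY = {"none", "developer", "ai_itself", "unclear"}
REASONING = {"consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "approval", "resignation", "mixed", "unclear"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, as in the table above

    def validate(self) -> None:
        """Raise if any dimension carries a label outside the observed set."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```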
Raw LLM Response
[
{"id":"ytc_Ugz7jiol8y-3kDAQ72p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLRAA5Qu-ZLBa8C6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwOaAlI_D6L2KcRxyF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9-oU5MPpmgAfkr254AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTXGMA-UqmL5vf-9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwu73_33tALn8zxVMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwmQ1hv_mfhIV4byUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxybr-Ra7PgMv0XTFN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy5pVyQGxzO-sk7cwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzomI_L3ALbMDFUUkh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
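The Coding Result table above is simply the matching element of a batch response like this one. Here is a minimal sketch of how a raw batch response might be parsed and indexed by comment ID, assuming the field names shown in the JSON; the function name and the `unclear` fallback for missing fields are assumptions, not the pipeline's actual implementation.

```python
import json


def index_llm_response(raw: str) -> dict[str, dict]:
    """Parse one raw batch response (a JSON array of per-comment codings)
    and index it by comment id. Field names follow the response shown above."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        by_id[rec["id"]] = {
            "responsibility": rec.get("responsibility", "unclear"),
            "reasoning": rec.get("reasoning", "unclear"),
            "policy": rec.get("policy", "unclear"),
            "emotion": rec.get("emotion", "unclear"),
        }
    return by_id


# Hypothetical usage with the batch above (raw_response_text holds the JSON array):
# codes = index_llm_response(raw_response_text)
# codes["ytc_Ugwu73_33tALn8zxVMF4AaABAg"]
# -> {'responsibility': 'none', 'reasoning': 'consequentialist',
#     'policy': 'regulate', 'emotion': 'fear'}
```

Indexing by comment ID mirrors the "look up by comment ID" flow at the top of this page: once a batch is indexed, fetching the coding for any displayed comment is a single dictionary lookup.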