Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugx9FtfdL…: "In my (limited) experience, A.I. very often gives incorrect answers. Yet, people…"
- ytc_UgwDP78pQ…: "This is possibly the best podcast episode I have ever seen, wrapped in the presu…"
- ytc_UgyPpdFGJ…: "Here's a fun fact. In all Ghibli movies, they use the traditional method of ani…"
- ytc_UgwXeAOL3…: "Using ai for art is a disgrace to life and the artist and we should not let ai t…"
- ytc_UgwhceQGT…: "I've noticed using chatgpt recently will flat out lie about things just unwillin…"
- ytc_UgzX7IRVN…: "You'll also get better diagnoses from AI. Talk to chatGPT about your symptoms. Y…"
- ytc_Ugz6r5vUE…: "When approaching a scene with emergency or police vehicles on the side of the ro…"
- ytc_UgzyQJ6L0…: "i dont wanna sound too edgy but this is exactly how a distopian world starts lik…"
Comment
The main concern with bioterrorism when it comes to LLMs is their ability to assist a moderately technical person in synthesizing a pandemic-grade, lethal pathogen with lab equipment that can be bought on the cheap. This is something that machine learning algorithms are well suited to doing, as evidenced by the success of programs like AlphaFold in predicting protein structures from amino acid sequences. With the right hardware and a localized setup, a malicious actor could theoretically get a jailbroken frontier model to provide step-by-step instructions on how to synthesize a biological agent today. And, unlike with nuclear weapons, the associated costs and technical barriers will only continue to go down with time, even if we do somehow manage to regulate compute like we did fissile material.
reddit
AI Governance
1770928048.0 (2026-02-12 UTC)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o50nb5q","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_o51l7fw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_oa4057u","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_oabz523","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_oa0gx99","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
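Looking up a coding by comment ID amounts to parsing this JSON array and indexing it by `id`. A minimal sketch of that lookup is below; the set of allowed values per dimension is an assumption inferred only from the codings visible on this page, not a definitive code book.

```python
import json

# Assumed code book, inferred from the values seen in this page's codings.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

# Example raw response, shortened to one row from the array above.
raw = '''[
  {"id":"rdc_o51l7fw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM response and return the coding for one comment ID."""
    rows = json.loads(raw_response)
    by_id = {row["id"]: row for row in rows}
    coding = by_id[comment_id]
    # Flag any value that falls outside the assumed code book.
    for dim, allowed in ALLOWED.items():
        if coding.get(dim) not in allowed:
            raise ValueError(f"unexpected {dim!r} value: {coding.get(dim)!r}")
    return coding

coding = lookup(raw, "rdc_o51l7fw")
print(coding["policy"])  # regulate
```

Validating against an explicit value set catches the common failure mode where the model invents a label outside the schema, which would otherwise silently skew downstream counts.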