Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I'm doing very OK with ChatGPT. I don't let it decide architecture. I lay out my…" (ytc_UgzoXZJ3m…)
- "Is there any good cheaper alternative other than Claude? I look at Claude's pric…" (rdc_o8doyjs)
- "need to stop sending kids to school,the less brains the better our planet will b…" (ytc_UgzPFY3oV…)
- "I save and keep the prompt I use in the Sora app to. Thanks / This voice voice cl…" (ytc_UgzMYtgua…)
- "AI ,,MUST HAVE GREEN ENERGY TO OPERATE. there is no possible way to supply it EN…" (ytc_UgzJk250U…)
- "I hate when AI companies use the word "safety" but what they really mean is "cen…" (ytc_UgxvTPF4a…)
- "Make AI "Art" / Waits for Twitter artist to make it art / You got a free commission …" (ytc_UgwV3jYWu…)
- "We can make incredibly powerful computers, and have a much more capable ChatGPT …" (ytr_UgwlMF4WH…)
Comment
If the majority of the people in the know of developing a technology all agree that there is at least a 10% chance of extinction then there is a huge problem. Seems like there's key percentages here, the 1 in 10 chance(10%), the 1 in 4 chance(25%) and the 1 in 2 chance(50%) chance this will lead to the extinction of humanity. No matter how I look at those odds, all I keep coming back to is that we are so screwed. If these percentages pertain to one A.I. system and there are a half dozen well known high functioning A.I.'s how much are these percentages compounded is 10% really 20%? Worse? Has no one gone to a casino before, the house always wins and it's looking less and less like humanity is the house.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-07-29T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy6Wstd_6Y9SS78h1t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx15K1cZowNuIyjfiR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzOZ8-di15Nhx3Zkk54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz7bdQaU177bWxdpB14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwlx12ure6Aq6lXXT94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwBl2t2haYv8AEYoct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQP1kaz1d8fTVVAal4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwuKG5OyDpCKFQWsxB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw1r6Isf8897AJwM654AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBC5Qstgo3iB3dg7p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
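A raw response like the one above can be checked before it is accepted into the coding table. The sketch below parses the JSON array and flags any row whose label falls outside the values observed in this sample; the category sets are an assumption inferred from this one response, not the project's full codebook, so treat them as a lower bound.

```python
import json

# Label values observed in the sample response above. The real codebook may
# define more categories (assumption): extend these sets as needed.
OBSERVED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "government"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"disapproval", "outrage", "fear", "resignation", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; warn about labels not seen before."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in OBSERVED.items():
            value = row.get(dim)
            if value not in allowed:
                # An unseen label may be a new codebook value or an LLM error.
                print(f"{row['id']}: unexpected {dim}={value!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"fear"}]')
rows = validate_coding(raw)
print(len(rows))  # → 1
```

Raising on a missing `id` while only warning on unexpected labels keeps malformed rows out of the table without rejecting responses that legitimately use codebook values absent from this sample.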