Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Disney and Hollywoke have destoyed ALL IPs, have ruined entertainment for a whol…" (`ytc_UgyuY6CZn…`)
- "look at greece / The Greeks barely worked at all. Admittedly because they used sl…" (`rdc_dt9p3ya`)
- "suddenly LTT hates AI when in 2015 they were ALL OVER microsoft’s AI training an…" (`ytc_UgywyC3pw…`)
- "Yea so I used AI once just to try it out. I got bored after 2 hours. I was polit…" (`ytc_Ugz20s8m1…`)
- "STANFORD PINES?? wait, how many fingers dyou think AI would give him in a prompt…" (`ytc_UgyPeedEw…`)
- "AI will be the end of humanity, but not because they will become sentient and wi…" (`ytc_Ugy_mtnPh…`)
- "Ngl, why are people trying to fight back with people using AI for art. We're al…" (`ytc_UgzpbICuv…`)
- "16:52 that little smile I don’t know how I feel about AI getting this intelligen…" (`ytc_UgyotVbcQ…`)
Comment

> If we get AGI tomorrow, within current state of AI control research, its 100% doom. The only question is when we actually get AGI. Based on current capabilities, and how all major AI players (Meta is not major compared to OpenAI, Anthropic and DeepMind) are all freaking out - it might be sooner than in a decade. Sounds pretty risky to me.

Source: youtube · AI Governance · 2023-06-27T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugz8xg_TAUp50sGdgEh4AaABAg.9rPvpEz94vU9rTx70S0Rsz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rTlC_DGHlx","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rU14m4eHLF","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwRg0KJemLVpW6t2ex4AaABAg.9rPYZJbr5b39rhCV0ZL18D","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rPp6KkFuGT","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzuxRs_BKrl6JIqN_B4AaABAg.9rPRpVBUzUW9rU0BErm0H6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwMSBDoNzy8g3RLmlt4AaABAg.9rPH0awsbg09rj8XXtugv2","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwp8jS3Ka-LbhS0UCx4AaABAg.9rPEb_4SgMm9rSm6Y2E2Km","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxB7Y9xAPQXXJSV6m94AaABAg.9rPDxNI2VJc9rQ-8TiYDMl","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwkyKlTs7O7KBb2pCV4AaABAg.9rP5RLTN4nr9rR7sXcgyOH","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
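A raw response like the one above is only usable downstream if every record parses and every dimension carries a known code. The following is a minimal validation sketch; the `ALLOWED` sets are inferred from the values visible in this response, not from the project's actual codebook, and the function name and structure are illustrative assumptions.

```python
import json

# Allowed codes per dimension, inferred from the response shown above;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "approval", "indifference"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and return {comment_id: codes},
    raising ValueError if any record uses an unknown code."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record batch for illustration.
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_batch(raw))
```

Rejecting the whole batch on a single bad code keeps malformed model output from silently contaminating the coded dataset; a more forgiving variant could instead mark offending records as `unclear` and log them for re-coding.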