Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What we call "AI" today is basically just large statistical models that predict the next word based on context. No secret plans, no consciousness, no "Machiavellian" plotting. It’s kind of like saying a calculator is planning a revolution just because it can multiply numbers. LLMs are powerful at generating text, but it’s still just… predicted text, not intention.
There are no credible reports of an LLM autonomously blackmailing anyone. What circulates online is usually media hype, misunderstanding, or confusion with actual malware written by humans.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-09-01T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzTlHVp6Q1BsgGRy-B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwmLWg9YPbGOO7Gh7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4RLdbZZZvm8RFfvN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxQeA4pGo_PtPElS-V4AaABAg","responsibility":"intellectual","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwaVPCGlxZnuvwdE6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz4X-1N4XIk-JYCSQ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCYafao9N1i7qyhQ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugz92bairmfuiRE9NZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz31T1cUq1ePVO9Avh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwcY8__jhFEoOW1x9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]