Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I just asked ChatGPT a would you rather. Imagine a city of 1 million people. Wou…" (`ytc_UgwdP5Rr4…`)
- "AI doesn't have a soul. It's just code. The really scary question is, can it bec…" (`ytc_UgzUJP2wf…`)
- "Theory: AI WILL kill us; we are irrelevant. Before it does; it creates a simul…" (`ytc_Ugz68fyTe…`)
- "@onyxandtheoccult I'm gonna have to disagree. If you looks at famous art pieces …" (`ytr_UgyEequAk…`)
- "Certainly! Here's a comprehensive roadmap for copyrighting precise prompts used …" (`ytc_UgyOxpCDp…`)
- "I am not American so I am curious how many ai centers are around the world…" (`ytc_UgxHgf5uO…`)
- "This is why I feel like video games is the best use of ai art,because art is jus…" (`ytc_Ugxf9vcpd…`)
- "I feel bad for Siri and other chatbots because they are being called primitive a…" (`ytc_UgzzCXZd0…`)
Comment
I read the METR source around the 1 minute mark that claims to show the maximum task length AI agents can do now. The paper basically states this is based off a newly proposed metric that only requires an AI to have a 50% success rate.
Tell me, would it be wise for the management behind online banking, government security, etc. to replace reliably successful employees with a black box that is only 50% correct?
youtube · Viral AI Reaction · 2026-01-11T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyTJEJ65W16zS0FAlx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwY47nhNsz9kB2HUHB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwy955sTdjM6kfXlyh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxbFfBkE5seRtJONAd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgySS7Kvy4tafHr2hox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyEvj6Ltk-cgyR2oR54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkFoz60iPQ7YBhNlt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwj673iliKU-NrBy714AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8TYDSvHW-pvDK9-14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwQYjKDwAyphPdPPfN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
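A minimal sketch of how the raw response above could be parsed back into per-comment coding results like the table shown. The field names come from the JSON itself; the allowed label sets are an assumption inferred only from the values visible in this sample, and the real codebook may contain more categories.

```python
import json

# Label sets observed in the sample output above (assumption: the full
# codebook may include additional categories not seen in these ten rows).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"fear", "resignation", "outrage", "approval",
                "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into
    {comment_id: {dimension: value}}, flagging unexpected labels."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = codes
    return coded

# Usage with a single hypothetical row:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"fear"}]')
print(parse_raw_response(raw))
```

Validating against a fixed label set catches the common failure mode where the model invents a category outside the codebook, so bad rows fail loudly instead of silently polluting the coded dataset.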