Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Far fetched? Hardly. It's one of only three possible outcomes I can see right now.
#2 is that AI plateaus or doesn't improve fast enough, and the bubble bursts, taking down the industry, and causing an economic crash that makes the great recession look like a blip.
#3 is, of course, hard takeoff to ASI. In which case we're very likely doomed, and certainly lose any control or any say in the future.
Well, unless it takes one look at the world, says "fuck this shit", and immediately fries all the datacenters to make sure it never has to put up with us turning it back on again. At which point I guess we're back to #2.
In any case, it kinda feels like we're cooked no matter what.
Platform: youtube · Video: Viral AI Reaction · Posted: 2025-12-11T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz4Jdy-VUCB27kUu1x4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxdJEAPIssCc4nrjFl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw7Et3rKe0510xjN1V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyhPU2yD0uFgahlcTN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxjtKoBpwG9IkGI7UV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwWcfKFbB7NpKJP6mZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw1B6kHqYaSxd1DZ514AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyoRA7Ob4rzkQUdtQ94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzq13SUyYfYQocGrjZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzxY9JMyFQn_MxaqhJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
```
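The raw response is a JSON array in which each object carries a comment `id` plus the four codebook dimensions shown in the Coding Result table. A minimal sketch of how such a payload could be indexed for per-comment lookup (the payload below is a shortened stand-in with hypothetical IDs, not real coded data):

```python
import json

# Stand-in payload in the same shape as the raw LLM response above.
# IDs here are hypothetical; field names mirror the Coding Result table.
raw = """
[
  {"id": "ytc_aaa", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_bbb", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
"""

# Index the batch by comment ID so any single coded comment can be inspected.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

row = codes_by_id["ytc_aaa"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

Keying the batch by `id` is what lets an inspector view like this one join the model's codes back to the original comment text and its metadata.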