Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "He and I definitely agreed on something; which was we should’ve unified globally…" (ytr_UgzqLRsh6…)
- "Hey, the wealthy, who paid for the AI, needed to make a buck you know. So, rob…" (ytc_UgzBVpczl…)
- "The only response is to reject AI entirely before we get to the Butlerian Jihad,…" (ytc_UgwAuySID…)
- "Imagine asking Humans these questions. People need to treat AI models that have …" (ytc_Ugx04KjG2…)
- "Here comes the BLACK people in the comments who claim to be the original Indigen…" (ytc_Ugxoq8wNZ…)
- "AI Art sucks, you don’t even make it, the whole purpose of art is putting your i…" (ytc_Ugxb1cA-U…)
- ""Ai accidentally proved the existence of soul by showing what art looks like wit…" (ytc_UgwsK9fAD…)
- "Listening to Dr. Roman Yampolskiy’s words, I notice how quickly the AI narrative…" (ytr_UgzfImFYB…)
Comment
Look up a series named "SITUATIONAL AWARENESS - The Decade Ahead", written in 2024 by Leopold Aschenbrenner, a former OpenAI researcher turned AI investor. The first article in the series, "From GPT-4 to AGI: Counting the OOMs", simply looks at the rate of effective compute improvement up to GPT-4 and then extrapolates it out to 2029. If the upward improvement trends hold, by 2027 to 2029 we will have AI models 1 to 10 million times better than GPT-4. This would put the AIs at the research scientist/engineer level.
Source: youtube · AI Jobs · 2025-07-25T15:3…
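The extrapolation in the comment can be sanity-checked with quick arithmetic. This is a sketch, not Aschenbrenner's actual figures: the GPT-4 baseline year of 2023 is an assumption here, and "times better" is read as a raw effective-compute multiple.

```python
import math

GPT4_YEAR = 2023  # assumed release/baseline year for GPT-4

def ooms_per_year(factor, target_year, base_year=GPT4_YEAR):
    """Orders of magnitude (OOMs) per year implied by reaching
    `factor`x effective compute by `target_year`."""
    return math.log10(factor) / (target_year - base_year)

# "1 million times better by 2027" implies:
print(ooms_per_year(1e6, 2027))   # 1.5 OOMs of effective compute per year
# "10 million times better by 2029" implies:
print(ooms_per_year(1e7, 2029))   # ~1.17 OOMs per year
```

So the comment's range corresponds to sustaining roughly 1.2 to 1.5 OOMs of effective-compute growth per year from the assumed 2023 baseline.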
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyycPq8ZhxW7FLeW6h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTTDye-_zCjbEA-UF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZMRI1nVJuafWT_XF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxb9NISYqBQXo0WSl14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwpUW3nKAzx0yyMz94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzk6c8mk6ocdkADZgN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxnZqWYg5a6967-vjd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFsAD9xDQNwXhdKsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzurMw-5fVcZFWiTfB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylDmAxLafzpd2EvLx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
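The raw response above is a JSON array of per-comment codes, which the dashboard presumably parses into the dimension table shown earlier. A minimal validation sketch, assuming the value sets observed in this batch (the full codebook may include more labels):

```python
import json

# Value sets observed in this batch; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate(batch):
    """Index coded records by comment id, rejecting unknown labels."""
    coded = {}
    for rec in batch:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the batch above, as a usage example:
raw = '[{"id":"ytc_UgyycPq8ZhxW7FLeW6h4AaABAg","responsibility":"none",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
coded = validate(json.loads(raw))
print(coded["ytc_UgyycPq8ZhxW7FLeW6h4AaABAg"]["emotion"])  # approval
```

Validating against a fixed label set catches the most common failure mode of LLM coding runs: labels drifting outside the schema.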