Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Humans are also liars. This is just a reflection of us. Seek the truth despite a…" (ytc_UgyyELxwU…)
- "Iris recognition is ridiculously accurate, it's been used for a while as a natio…" (ytr_UgySId0lK…)
- "If AI is so smart why hasn't it solved the Fusion Power problem? Are these so ca…" (ytc_UgxYvljPp…)
- "To correct what you said about your analogy regarding personalised filter bubble…" (ytc_UgySO8yRV…)
- "Those jobs being automated IS a problem if they're the ones critical to the rest…" (ytr_UgxC38DGK…)
- "@zilvart238tbf, the data collection of that is different. Tesla only was counti…" (ytr_UgwBTwCPU…)
- "Why not just use AI to help put people to work? Not make work easier.…" (ytc_UgymGmj6p…)
- "1:58 wait this sounds fam- no is it the story about the ai who told someone to r…" (ytc_UgxaokaRZ…)
Comment

> Metaculus, a reputation-based prediction platform, has community forecasts indicating a 50% chance of AGI being publicly announced by around 2040, with some estimates as early as 2027 for weaker forms of general AI. These timelines have shortened significantly since 2022 due to rapid advancements in large language models.
>
> — Grok

youtube · AI Governance · 2025-09-04T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxCDFKYmZT55-82t4R4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwRUQDZcwoRtrM7lXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-HhzWd4Ajl6Kvz8F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxQJETYwN9wi2745Ap4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyPgjXMnj4RtCJOiRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxfa4zIvTVhQqdHhnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzzdqQlscTWF79qfG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyvSHG_LrcWspcFxiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzA9bNThU7j8w9wbed4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzPsTICMLA_j8JPfEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
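The raw response above is a JSON array with one record per comment, each carrying an `id` plus four coding dimensions. A minimal sketch of parsing and validating such a response in Python (the allowed-value sets below are assumptions inferred from the codes visible in this dump, not the full codebook):

```python
import json

# Assumed codebook: allowed values per dimension, inferred from the codes
# seen in the raw response above. The real codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array of per-comment codes)
    into {comment_id: {dimension: value}}, rejecting unexpected values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in dims.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = dims
    return coded

raw = ('[{"id":"ytc_UgxCDFKYmZT55-82t4R4AaABAg",'
       '"responsibility":"unclear","reasoning":"deontological",'
       '"policy":"unclear","emotion":"outrage"}]')
codes = parse_coding_response(raw)
print(codes["ytc_UgxCDFKYmZT55-82t4R4AaABAg"]["emotion"])  # outrage
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the codebook, so bad records fail loudly instead of silently entering the coded table.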