Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Imagine control a humanoid we sent to Mars from the Earth with direct control fr…" (ytc_UgwyeIarw…)
- "Honestly you are too adorable of AI, it may be smarter but will it be able to cr…" (ytc_UgxS23xgf…)
- "Hi Mathan, you got the right answer. Kudos. The contest is over and winners have…" (ytr_UgxrPmGdl…)
- "Out of curiosity, why not have AI happen that way these big companies can pay th…" (ytc_UgxlvzVj7…)
- "It is called artificial intelligence, not comparable human intelligence. A two …" (ytc_UgwXtI5DT…)
- "Great. So give artists a little bit of money so we can train AI to replace them.…" (ytc_Ugx75Os3F…)
- "4:40 Why would one put a sentient AI in labor when you can easily just have a le…" (ytc_UghVOYyM5…)
- "Not to mention how we've stigmatized mental health/illness issues in our society…" (ytr_Ugyhzf6G0…)
Comment
Also, please find someone knowledgeable about China to interview. The assumptions that go on about China always amaze me. Do we know that China wants to develop harmful AI? How do we know that such claims are not just propaganda? Find a scholar of Chinese foreign policy / science policy and let’s get a real analysis of that. Looking at the USA and China today, which appears to be the better-governed country? Hmmmm.
youtube · AI Governance · 2025-07-01T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw4xWfhCXzV6k4ulJl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw-bqH-eCsR7d8eRWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYR40-oKYIunEg-tF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwy8utEb7VRDkHF_RJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkEV46ustYqRTf-tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiakJGAuZfjhnIkNl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyORtlXmUAdOtb44k94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwG_AvZKlovXMUDACx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDGnzL3OcvnFSb-kN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwnK1SJpDoKwiCnQAZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
```
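A raw response like the one above can be parsed and checked before the codings are stored. The sketch below is a minimal example, assuming the category vocabularies are exactly those that appear in this page's sample output (the real codebook may define more values), and `validate_batch` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"company", "government", "developer", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "fear",
                "approval", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {row.get(dim)!r} for {dim}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a shortened, hypothetical ID:
raw = ('[{"id":"ytc_x","responsibility":"government","reasoning":'
       '"contractualist","policy":"liability","emotion":"mixed"}]')
print(validate_batch(raw)["ytc_x"]["policy"])  # liability
```

Indexing by comment ID mirrors the "Look up by comment ID" workflow above: a stored coding can then be retrieved and displayed alongside the original comment.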