Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I hate AI so much I'm hoping the big studios that own the world win…" (ytc_Ugw-prnKf…)
- "I like how these guys are pretending to be worried about AI scheming as though i…" (ytc_UgzV2n-0r…)
- "I think AI safety conversation is in conflict with the "core values" of Yann's i…" (ytr_UgwRg0KJe…)
- "heres my two cents: if the agi is smart, wouldnt it see regulations as "control…" (ytc_UgzaLP3v7…)
- "Compliance businesses such as banking, government, insurance etc. may not use AI…" (ytr_Ugyb5c66Z…)
- "The societal fear and anxiety surrounding Artificial Intelligence (AI) are stri…" (ytc_Ugxhyi-7_…)
- "If they are using AI to upscale, then they are also using it to train the Googl…" (ytc_UgxSQnFGt…)
- "I always got the impression he felt like he was in the shadow of his brother (he…" (ytc_Ugx-Ms4up…)
Comment

> Great review on a very relevant topic. It reminded me of a Dan Brown book I read like 20 years ago called 'Digital Fortress'. If you ever do a part 2 it would be interesting to cover quantum computing, even if just how it relates to super charging AI. Also why NASA recently and abruptly pulled the plug on their quantum computer. They say they had concerns similar to the AI concerns voiced by experts globally. However, I don't think others concerns are something government pays much attention to. Not when there is a power struggle for computing supremacy. I think they came to a point in their research where they actually found something that scared the snot out of them enough to pull the plug abruptly - like WTF did we just do?!

Source: youtube · Video topic: AI Governance · Posted: 2024-02-19T23:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzGxDZABR8UFKoY2xh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugywl8YfR4Ht-g-9Ldt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYzu1qApH7-gnf-n94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugymuw2sgVosc84GcS54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-etSRmfHOU0MIdWV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyI-LlP81IpzA_C2D54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUXA1YZLbRQEcovQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx2XbjV6wEDU8qubgd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzfx8N4B_PYlaCxfNl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgznisDjnwNRcGE7PX94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
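The lookup-by-comment-ID step can be sketched as follows: parse the raw model output as a JSON array and index each coding row by its `id`, so any coded comment's dimensions can be retrieved directly. This is a minimal illustration, not the tool's actual implementation; the `index_codings` helper is a hypothetical name, and the sample payload below reproduces two rows from the response above.

```python
import json

# Two rows copied from the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgzGxDZABR8UFKoY2xh4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgznisDjnwNRcGE7PX94AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "approval"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows)
    and return a dict keyed by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)

# Look up the sample comment's coding by its id; its values match
# the Coding Result table above (emotion: approval, rest unclear).
sample = codings["ytc_UgznisDjnwNRcGE7PX94AaABAg"]
print(sample["emotion"])  # -> approval
```

Indexing once and reusing the dict keeps each subsequent lookup O(1), which matters when a single response codes many comments per batch.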