Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment directly by its ID, or browse the random samples below.
- "So radiologists and truck drivers are gonna have it hard in the next 10 years.…" (`rdc_fct4wu5`)
- "This is actually an incredibly powerful argument in favor of socialism rather th…" (`ytc_UgySB-oOu…`)
- "100% agree / I own a business and work in tech services & do consultancy / the only …" (`ytr_Ugwl59KFG…`)
- "As somebody who talks to gpt alot, this doesn't just happen. If you start a new…" (`ytc_Ugyii0Lh4…`)
- "Amazon employee here. AI at Amazon, irrespective of all the hype, is NOT positio…" (`ytc_UgyVLJm58…`)
- "Im more worried about what pushes people to rather bond with an AI than other pe…" (`rdc_mdj06g0`)
- "I have no clue what the statistics say, but this video only points out single in…" (`ytc_Ugyo7ajVr…`)
- "Bruh you've been taken on a ruse cruise, its a bunch of fuckin if=then staements…" (`ytr_Ugw1koKQo…`)
Comment
I think Elon should have explained to Tucker the difference between Specialized and General AI. Specialized AI can cause unforeseen damage, but general AI is a completely new ball game.
Nobody really knows what this entity is going to be and do. It could self-replicate, redirect power to itself, optimize processes by removing humans from the solution, manage zero-sum game wars, etc.
General AI is going to happen, and the consensus is that it will converge around year 2040.
Now imagine another 10 years after that, when the AI completed acquiring all forms of digitized knowledge from humans and had time to explore and evolve.
Source: youtube · AI Governance · 2023-04-18T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzcShx882zGZN9X7WN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXQ-aAN_yINWMCwnt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2YYEghygIvxXYYRx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxpCzcwEFEjn26cud94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwy1f9PF37mMYopH3t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxiEQvyoUeVKotCbG94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgweXKTuoDmhoXNLf0p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxuWxoVAOVFsqeL4IF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMXwC3BoT42juee2x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyq6gNj_0Zl1hidWml4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
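The lookup-by-comment-ID step described above can be sketched roughly as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw response is valid JSON in the shape shown, and the function name `index_by_id` and the `DIMENSIONS` tuple are hypothetical names chosen here.

```python
import json

# The four coding dimensions observed in the responses above (assumed fixed).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# A small excerpt of a raw LLM response, in the shape shown above.
raw_response = """[
  {"id": "ytc_UgzcShx882zGZN9X7WN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz2YYEghygIvxXYYRx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""


def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID,
    dropping any row that is missing one of the coding dimensions."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row[dim] for dim in DIMENSIONS}
        for row in rows
        if all(dim in row for dim in DIMENSIONS)
    }


codes = index_by_id(raw_response)
print(codes["ytc_Ugz2YYEghygIvxXYYRx4AaABAg"]["policy"])  # → liability
```

Indexing by ID up front is what makes "inspect the exact model output for any coded comment" a constant-time lookup rather than a scan over every batch of responses.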