Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
yeah, AI is bad but other uses also exist like it turns out AI can be used for s…
ytc_Ugxy8mNNC…
If automation takes peoples means of making a living, then guaranteed income can…
ytc_UgzsYtIYS…
the goal of AI is to 'think' like a human. in my experience, they don't 'think.'…
ytc_Ugx-boj4z…
By win you mean drive OpenAI into bankruptcy? Then yes. They will win. Google ha…
rdc_nsetqku
It's written in the Bible centuries ago. AI is a dangerous tool & the final sta…
ytc_Ugyd9aOUq…
This shows the ignorance of this girl. Black Mirror stuff? We have been talking …
rdc_feimvob
Hey Sagar! I am a boomer and I don’t appreciate your stereotyping of me! I can’t…
ytc_UgwvknzPb…
The end result of all of this is for the less than 1% of the world’s population …
ytc_Ugz1XXy28…
Comment
That's not a good future-proof strategy for containment or for your future sleep. They're already running locally, albeit weaker than the big ones, so they'll be spread out over millions of computers in the future. There are 10-100 labs around the world running big ones on hundreds of data centers, and likely more are coming. Additionally, an AI can just move through the global network or they can trick us if they're smart enough, maybe even hide their digital tracks (if they even need to at that point). They don't need to trick a lot of people either, just a few.
Once AI is used in all devices and services in a few years switching them off will be very costly as well, not everyone will want to do it.
We need to figure out a control mechanism of some sort, but we don't have one right now, that's the problem.
youtube
AI Governance
2023-05-02T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgyM8sNyt8XIeqJ0pIZ4AaABAg.9pD19C0wnHd9pDNQHRWHTf","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw7CZz0ZfLanO_l5s94AaABAg.9pD0e3Da8lc9pD87aaPefr","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugw7CZz0ZfLanO_l5s94AaABAg.9pD0e3Da8lc9pDcL-G5u20","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugy9vG15MEv0v1SHnxJ4AaABAg.9pD0XV6jBXH9pDBnVeO880","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw-tzkpOAX1Ib81iL94AaABAg.9pD-Z1TLtrG9pD46hzIf2P","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxIgt3ibgFkJEEf7uZ4AaABAg.9pD-U_PxnRa9pD0v5q9N9r","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxIgt3ibgFkJEEf7uZ4AaABAg.9pD-U_PxnRa9pDGpJsb6Ti","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxhpxvvzdhU-sB5gaZ4AaABAg.9pCy2CDaHZP9pD7e76Ji28","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugy050FUZ5iFygzIv8F4AaABAg.9pCy11hRotw9pD1WQ3C367","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw1K3MUQeBnT4Mcvc94AaABAg.9pCxmDUVmpF9pD-YzqwQUk","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
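The raw response above is a JSON array in which each object carries a comment `id` plus the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and sanity-checked is below; note that the per-dimension vocabularies are only those *observed in this one sample*, the real code books likely contain more values, and `parse_batch` is a hypothetical helper name, not part of the pipeline.

```python
import json

# Values observed in this sample batch only (assumption: the actual
# code books used by the coding pipeline may define additional values).
OBSERVED_CODES = {
    "responsibility": {"ai_itself", "company", "distributed", "user", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the observed vocabulary."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in OBSERVED_CODES}
        for dim, value in codes.items():
            if value not in OBSERVED_CODES[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Illustrative input with a made-up id, mirroring the response format above.
raw = ('[{"id":"demo_1","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_batch(raw)["demo_1"]["emotion"])  # fear
```

Keying the result by comment ID makes it straightforward to join the codes back to the stored comments (e.g. the `ytc_`/`rdc_`/`ytr_` records shown above) and to detect IDs the model dropped or duplicated.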