Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- `ytc_UgxCNDlsd…`: "It shocks me that there are people who seriously defend AI and think there is no…"
- `ytc_UgyBEDFPe…`: "Meanwhile, AI data centres are taking the water and power for entire cities. Lit…"
- `ytc_UgwDTpQdT…`: "Look, im someone who is pro-ai, but the whole 'image scraping' thing feels like …"
- `ytc_UgzpRUj8M…`: "i think the "experts" have no idea how AI works! its by no means word prediction…"
- `ytc_Ugy20dYWK…`: "Skynet...looks kind of video editing or. CGI, cuz in 2003 not sure if AI was ava…"
- `ytc_UgyjeI2yV…`: "AI is a good tool, but don't forget, we got here thru the millennia by having hu…"
- `ytc_Ugyc_K8A4…`: "No robots won't take over the world. They will be used in these scenarios: WW3 w…"
- `ytc_Ugyvptbqg…`: "Maybe learn the AI pain. So that it can understand the human struggle. That's th…"
Comment
What's the point? The key ingredients are open source, and whatever regulations they come with won't apply worldwide, only in the US.
I can see countries like China, Russia, India, and many more going full steam towards a non-regulated super 'product' to be the first country to achieve it.
Honestly, I see this as the same as the race for nuclear weapons, except times have changed, and people want more regulations and laws to feel 'safer' and have some feeling of control.
But imagine a not-so-friendly country achieving first an AI so smart and powerful that it can literally shut down a country, like making you unable to use your nukes and any sort of defense systems. What do you do now? Ask for help? Release secret weapons?
And these might be extreme-case scenarios, but don't tell me you're 100% sure that there isn't a country leader out there who would get a little cockier if he had that sort of technology to force some of his decisions on others.
And I can't help thinking about Hiroshima and Nagasaki.
Source: youtube · Topic: AI Governance · Posted: 2023-05-17T12:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxCNn21Seu2W58Er7B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxemm2rUIMFVpWLCTN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6AALSe0UMQ2e3XCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxpxvyRgrb4HhxxU0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzig2F_2CMhVQTl-n14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkkLSbm7zBiqSGAiB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugx-XXO4_ihxHTKlrgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVJz32SyODxGWMcyZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFyT6_Y-QU69Fv1Z94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnIwr_3AmGBm5l6X14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
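A response like the one above can be parsed, validated, and indexed by comment ID to support the lookup described at the top of this section. The sketch below is a minimal illustration, not the dashboard's actual code; the allowed value sets are an assumption inferred from the codes visible in this section (the real codebook may define additional categories, e.g. `deontological` is added here speculatively).

```python
import json

# Two rows copied from the raw LLM response above.
RAW_RESPONSE = '''[
  {"id": "ytc_UgyFyT6_Y-QU69Fv1Z94AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxVJz32SyODxGWMcyZ4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]'''

# Assumed codebook, reconstructed from the values observed in this section.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "mixed", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    by_id = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        # Reject rows whose values fall outside the assumed codebook.
        bad = {k for k, v in row.items() if v not in ALLOWED.get(k, set())}
        if bad:
            raise ValueError(f"{cid}: invalid values for {sorted(bad)}")
        by_id[cid] = row
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgyFyT6_Y-QU69Fv1Z94AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "look up by comment ID" operation a single dictionary access, and the validation step surfaces any off-codebook values the model emits instead of silently storing them.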