Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The doom and gloom of AI stuff is pretty tiring, especially when those pushing i…" (ytc_UgxWckLiA…)
- "I would imagine it essentially integrates the thinking ability of a traditional …" (rdc_ohxbyx6)
- "While I agree that large data centers are huge energy and water consumers, some …" (ytc_UgwC5MFcC…)
- "Welcome to my world. I’ve been in the video surveillance industry for over 40 y…" (ytc_UgyLvPXqj…)
- "how to defeat ai artist's ego: step 1: ask them to draw something on paper step …" (ytc_Ugyp9E7Jg…)
- "Mb the artists are just lazy Like I'm sorry for you to be it now, but some day …" (ytc_UgzGWc-s1…)
- "You not trusting driverless trucks is equivalent to you saying you don't make ed…" (ytr_UgyMSzErl…)
- "They will put as much effort into creating AI that will not take over from us, a…" (ytc_UgyRDQOFV…)
Comment
This letter is a nice gesture nothing more. Here we are talking about Labs and corporate competition, and only those that are known about and might comply, what about those labs that are functioning under the radar are they likely to pay any heed to this document? As for the idea that "governments should step in and institute a moratorium", even more of a joke - does America trust Russia or China to implement moratoriums against their own labs? Do Russia or China trust America to do likewise? This power that people believe AI possess is something every able government wants to get to first, just like the atom bomb. It may be to their own downfall but they're not going to let any other government get there first and force their capitulation. I'm an optimist too and the rate at which AI is advancing is frightening, and it would be great if the world took a time-out, but do you trust your opponents to do the same?
youtube · AI Governance · 2023-03-30T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxPxuxYH0szCQqktCB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwaImA1nKCKeN9MGMx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdE27xvBfqfnFSSEZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwCUYYYbRBE_1bHQJZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwEpeoR667atw8o1Pp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzvPDb_6vnhAYnqAM54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwvLJEGUMj3nGPvNvZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyJHXbIM3EovXUUIFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwReWRuaulaks0XV214AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKnBNGkfLJoUXCUVt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
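The raw response above is a JSON array with one object per coded comment, so the "Look up by comment ID" view amounts to indexing that array by the `id` field. A minimal sketch in Python (variable and function names here are illustrative, not part of the tool; the array is trimmed to two entries for brevity):

```python
import json

# Two entries copied from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_UgzvPDb_6vnhAYnqAM54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwaImA1nKCKeN9MGMx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# Build a comment-ID -> coding-result index from the JSON array.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning, policy, emotion) for one comment ID."""
    return codes_by_id[comment_id]

result = lookup("ytc_UgzvPDb_6vnhAYnqAM54AaABAg")
print(result["responsibility"], result["emotion"])  # government resignation
```

Each value of the index is one row of the Coding Result table; rendering it as "Dimension | Value" pairs is then a straightforward iteration over the dict.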