Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- I personally don't use AI in my artworks myself, because I am way too perfectio… (ytc_Ugxm2cucS…)
- Those who argue for AI art don’t understand art, and the those who create it, wh… (ytc_UgxpgXhYF…)
- I asked ChatGpt some health questions a little over a yr ago. I thought I had o… (ytc_UgywBXQeG…)
- Ai “artist“ who say that are very VERY desperate to be seen as artist. Like bro … (ytc_Ugz89g6-O…)
- that one robot: “girl we aren’t🙄” and then does the side eye💀 (NO CAUSE I FORGO… (ytc_Ugz-uiIJ2…)
- All products will be made for the elite while biological humans die off, I’m sur… (ytr_UgzSzgmHJ…)
- One question. If AI is going to replace the working class, then who is going to … (ytc_UgxnRlPnM…)
- Regarding the second point, I'm afraid that all of it is incorrect. 1) Your sto… (ytc_UgzrG1rhn…)
Comment
Steven don't be too hard on these doubters I think the idea of stopping research on AI for 6 months makes no sense unless you put up a plan of how you will solve the problem of 'is AI safe or not' in 6 in a months period. To say stop for 6 months without a plan how to get the answer is pointless. You have to come up with rules like Asimov's Laws...
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Or some way forward, it should be something the governments control but they are followers not leads.
Source: youtube · AI Governance · 2023-03-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxPxuxYH0szCQqktCB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwaImA1nKCKeN9MGMx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdE27xvBfqfnFSSEZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwCUYYYbRBE_1bHQJZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwEpeoR667atw8o1Pp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzvPDb_6vnhAYnqAM54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwvLJEGUMj3nGPvNvZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyJHXbIM3EovXUUIFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwReWRuaulaks0XV214AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKnBNGkfLJoUXCUVt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
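The raw response above is a JSON array with one object per comment, keyed by comment ID and carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch, assuming Python, of how such a response could be parsed and a coded comment looked up by its ID (the two sample objects are copied from the array above; the `index_by_id` helper is hypothetical, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment,
# following the schema shown above (responsibility, reasoning, policy, emotion).
raw_response = """[
  {"id": "ytc_UgwvLJEGUMj3nGPvNvZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwaImA1nKCKeN9MGMx4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_by_id(response_text):
    """Parse a raw LLM response and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
print(codings["ytc_UgwvLJEGUMj3nGPvNvZ4AaABAg"]["policy"])  # → regulate
```

Keying the parsed rows by `id` makes the "inspect by comment ID" lookup a constant-time dictionary access rather than a scan of the array.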