Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID, or inspect one of the random samples:

- "Humanity is doomed. AI will destroy Humanity by used by humans. We are destroyin…" (ytc_UgypBYEOP…)
- "It will be illegal soon. As soon as a deepfake is made of some politician doing …" (ytr_UgxcwBDai…)
- "I'm in support of ai art, cause by this everyone can make their imagination come…" (ytc_UgyewzZPh…)
- "Never under estimate IA bionic commando robot that is taught how to fight and de…" (ytc_Ugzp1f5oM…)
- "Plenty of those instagram “artists” and when you look at their work its all AI g…" (ytc_UgyvkyNWa…)
- "Well at least the people protesting are aware of the dangers and people here are…" (rdc_ntb56wx)
- "AI art all seems to have a similar 'flavor.' I don't believe AI, at its current …" (ytc_Ugwv_YJ2K…)
- "A relevant and almost universal example of why this wont exist for at least 2-3 …" (rdc_fct0kql)
Comment
As an AI hobbyist, I do think that a lot of the doomsday predictions around AI are overblown at this point. But I do think that it would absolutely be prudent to establish some ground rules for ethical development. The Congressional hearings were just for show... there would be no point in the US alone regulating development and powerful US companies will keep Congress from doing anything against them while the rest of the world is free to do as they please. We need to see something along the lines of the IAEA established through international treaties that can regulate state-sponsored and corporate development.
Ironically, it would make sense for a regulating body to have a sort of police AI. One that could scour the web for mentions of major developments, and coordinate to efficiently evaluate those emerging AIs...through code reviews to flag things of concern for human eyes, or whatever seems appropriate. Of course, that AI to rule them all seems like it could become exactly what we're afraid of...well-resourced, broad reach, learning from novel developments in others... -_-
youtube · AI Governance · 2023-07-07T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyBQI1tTmqV1VWD6qh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzVZcml9rQWGyZmXV14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykOeVtB1VEfrIDNaB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwzjK8EMspKs5AiQSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQlV3uMil2LmLTZOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxY8mnPAUvOTjcrSX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxD1MFK4aqVGkJJ1Ud4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyfW___-M5sNLxwTKh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx9WF67iFH-zEQFYpR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxwSoxHe3C5hLyFFUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
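Because the model returns one JSON array covering a whole batch of comments, looking up the coding for a single comment means parsing the array and indexing it by `id`. Below is a minimal sketch of that lookup step, using the field names from the response above; the `index_by_id` function name and the two-row sample payload are illustrative, not part of the tool itself.

```python
import json

# A truncated raw LLM response in the batch format shown above
# (two rows copied from the real output for illustration).
raw = """
[
  {"id":"ytc_UgxY8mnPAUvOTjcrSX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxD1MFK4aqVGkJJ1Ud4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

def index_by_id(raw_response: str) -> dict:
    """Parse a raw batch response and index each coding row by its comment id."""
    return {row["id"]: row for row in json.loads(raw_response)}

codings = index_by_id(raw)

# Look up one comment's coding by its id.
coding = codings["ytc_UgxY8mnPAUvOTjcrSX14AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

If the model ever returns malformed JSON or drops an `id`, `json.loads` or the dict lookup will raise, which is a reasonable place to flag the batch for re-coding.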