Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The fear of someone knowing how many and what I did with all my rengoku ai's…" (`ytc_UgzbCOcpH…`)
- "AI will be Used to LIMIT Human intellectual ability and be controlled by AI robo…" (`ytc_UgzisZvQz…`)
- "Me: \"Chatgpt, create a new prequel trilogy that doesn't enrage the fans\" / AI: \"R…" (`ytr_UgxEmlOce…`)
- "Anyone who dares question AI’s ability to take everyones job is seen as an AI sk…" (`rdc_mxyhmng`)
- "Yes Ban super intelligence. We need to have time to deal with what’s been creat…" (`ytc_UgwbBviPg…`)
- "I gave AI a go as a last minute revision tool before an exam... and it was usefu…" (`ytc_UgwnC3ECI…`)
- "If ai do all the work who gonna buy the stuff? Like the rich aren't gonna make a…" (`ytc_UgxOVIBrt…`)
- "LOL / After using GPT I am actually convinced that LLMs not conciuos! / But I am jus…" (`ytc_UgyMQWcHk…`)
Comment
There are several reasons why some people believe that the government should regulate artificial intelligence. One reason is that AI technology is advancing rapidly, and there is concern that it could have negative impacts on society if left unchecked. For example, some worry that AI could lead to widespread job loss, inequality, or even pose a threat to national security.
Another reason is that AI technology is often developed by private companies, and there is concern that these companies may prioritize their own interests over the public good. Government regulation could ensure that AI is developed and used in ways that benefit society as a whole.
Finally, there is concern about the ethical implications of AI, such as issues related to bias, privacy, and accountability. Government regulation could help ensure that these issues are addressed in a responsible and transparent manner. Straight from ChatGPT
Source: youtube
Posted: 2023-04-10T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
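Each coded dimension takes one of a small set of categorical values. As a minimal sanity-check sketch, the coding above can be validated against the category sets observed in this sample; note these sets are inferred from the codings shown on this page, and the actual codebook may define additional values:

```python
# Assumed category sets, inferred from the sample codings on this page;
# the real codebook may allow more values than appear here.
SCHEMA = {
    "responsibility": {"none", "government", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"indifference", "outrage", "resignation", "fear", "mixed"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimension names whose value is missing or outside the known set."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
record = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "indifference"}
print(invalid_fields(record))  # [] means every dimension has a known value
```

A record with an unrecognized or absent value would surface that dimension's name in the returned list, which is useful for catching malformed model output before it reaches the table.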
Raw LLM Response
```json
[
  {"id":"ytc_UgxLjvUneNat8fkG4s14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyOwxJHWFBhcCAAvFd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzzX9g_xKIW0iar3IN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw5YacrYVi2U0yhwlR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7EMXbvfU-BR0jsjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxnXHqzz4v0o-WA8dB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw__MalJqFf6_TS4ZJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx2ouboXQhhoyaKAip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyqFabGa5ewHyvwJwh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxU-sHsAkxI0n8r1A54AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```