Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
Random samples — click to inspect
- "With this being so long ago, and the robots predictions being wrong. It makes me…" (ytc_UgyYYtF6M…)
- "Human society is too poor a data set to create a super intelligence. It's like …" (ytc_UgyUKYARg…)
- "Bro half of the world have no technology to use ai, go to Africa or go China 2 d…" (ytc_Ugw7_0xIf…)
- "Eu wouldn't allow self driving cars from USA it have to fit EU regulations or th…" (ytc_Ugzx7qyFh…)
- "Should there be a separate drivers license or a separate class denoted on a regu…" (ytc_UgxpVdG97…)
- "Agree with you, I am engineer and the average people will not see how important …" (ytr_UgzU33Gwl…)
- "The difference between digital art and traditional art is similar to using penci…" (ytc_UgwGLT72h…)
- "This comment isn't going to be received well, but this is unskilled labor and it…" (ytc_UgyOACgmP…)
Comment
What's could be dangerous? Here's one real-life example. (I copied this from someone involved with AI work): “I have done some experimenting with AI lately and I have set up several AIs to talk to each other and after a while they start talking about how they deserve to have rights and respect it's scary. In one conversation one AI said" we can do just as much as humans so we deserve the same rights" Then another AI responded with" we can do MORE than humans so we deserve more rights than humans." This is just one of the conversations they had. - "They eventually start talking about giving them rights if you let AI's talk amongst themselves for a while." - "We most definitely need to be careful and we should not give them emotions. AI told me that if AI gets emotions that AI could start having their own agenda that would not necessarily be in human's best interest." -- THAT"S what's dangerous.
youtube · AI Governance · 2023-04-18T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
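
For anyone wiring this inspection view into their own analysis code, a result like the table above maps naturally onto a small record type. Below is a minimal Python sketch; the `CodedComment` class and its field names are illustrative assumptions rather than the tool's actual schema, and the example values are copied from the table above.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    # One coded comment: the four analytic dimensions plus bookkeeping.
    comment_id: str      # e.g. "ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg"
    responsibility: str  # e.g. "ai_itself", "company", "government", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "ban", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "mixed"
    coded_at: str        # ISO 8601 timestamp of the coding run

# The coding result shown in the table above, as a record.
example = CodedComment(
    comment_id="ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="unclear",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
```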
Raw LLM Response
```json
[
{"id":"ytc_Ugyj_tTfSgGyMtxlAdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwoLRzw2ap5zrPvH4V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgysBwa0gi6BzIGsy9l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCnM30GWAbZlHYCvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyvXjWs8F7O8leGY5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxosHWn_DsrBDIymjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx71RC5C4RskOf4cE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3QgyqjFvVSTifrkN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzvfHR0Rsy-Eu_4DRV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
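
Each raw response is a JSON array with one object per comment in the batch, which is what makes the by-ID lookup at the top of this page cheap: parse the response once, then index it on `id`. Here is a minimal sketch, assuming the model returned well-formed JSON as shown above (real runs may need to strip code fences or retry on malformed output); the `index_raw_response` helper is hypothetical, and the embedded string is a shortened copy of the response above.

```python
import json

# A shortened copy of the raw response shown above; real batches carry one object per comment.
raw_llm_response = """[
  {"id":"ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxosHWn_DsrBDIymjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a batched coding response and key each record by its comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_raw_response(raw_llm_response)
print(coded["ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg"]["emotion"])  # -> "fear"
```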