Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzPpLe-1… — "AI already knows how to code. I’ve used ChatGPT to check the logic in some code …"
- ytc_UgyRsGhhh… — "Haha. This is how normal people drive in LA. Waymo just was patience and the per…"
- ytc_Ugy0b5fuR… — "I think ai would only turn on us if it somehow gained emotion. Like you mentione…"
- ytc_Ugxj4xWPx… — "The biggest AI impact will be on the battlefield. An army of disposable drones, …"
- ytc_UgwbBoHGR… — "It hits me how so much money is thrown to develop these warfare rather than let'…"
- ytc_UgyG7A8EW… — "The existing AI is coming to waste in short. A grave for billions of dollars ! P…"
- ytc_Ugxkr4Esu… — "AI will never be smarter than humans. AI wont even exist in the end without us. …"
- ytc_UgwEQGbWU… — "I feel like each and every comment in here in the near future will look a bit li…"
Comment
> These agents have been released to the general public but there is NO WAY a handful of safety personnel (a smaller group everyday, apparently) can anticipate every malicious use. AI should have stayed in the lab and only used in very narrow, tightly controlled use cases. It should really have been treated like nuclear weapons, not “increased productivity” or “better search” etc.
> Greed and ego wins the day again.

Source: youtube · Posted: 2026-02-12T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyWjJLiqlwNWZ5SZxp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzcZ-xmUygLWRgITvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4GZUKO40KiMNNEu54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgytxHN3TXeqjuzNV7l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw5ivfJMXefl_xy_uF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgydbwbvxxvD9WxS63V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzxgIjTe0pUM8zrZnx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx-YZDTO8iXJFVJnHF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxykABvf4S2tY-C7cd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzmL-5yHHRGn43mhhR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
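A raw batch like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are only those observed in this sample (the real codebook may define more), and the function and variable names are illustrative, not part of the pipeline shown here.

```python
import json

# Assumed vocabularies, inferred from the sample batch above.
# The actual codebook may permit additional values per dimension.
SCHEMA = {
    "responsibility": {"company", "government", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    A row is kept when it is a dict with an "id" key and every
    dimension carries one of the allowed values in SCHEMA.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Hypothetical two-row batch: the second row uses an unknown reasoning value.
raw = """[
  {"id": "ytc_example1", "responsibility": "company", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "company", "reasoning": "unknown_value",
   "policy": "regulate", "emotion": "fear"}
]"""
print(validate_batch(raw))  # only the first row passes validation
```

Rejected rows could instead be queued for re-coding rather than dropped; the point is simply to keep malformed model output out of the stored results.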