Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Has international law been effectively…" (🌐 E. International Order & Treaty Systems) — ytc_Ugz0OAqGk…
- "The scariest recent development in AI is that Neuro sama circumvented her filter…" — ytc_UgzjZbYAr…
- "Sounds pre programmed like open AI but put it in a classroom with multiple screa…" — ytc_UgzP2Ay3L…
- "I feel exactly the same way as witty...what jobs will be available in a world ru…" — ytc_Ugy6y9IHL…
- "Another Doomsday advertising by and for Silicon Valley. Don’t buy it: AI halluci…" — ytc_Ugxd1yjg_…
- "this is garbage, handful poeple want to control the use case, the O/P as they se…" — ytc_Ugw1oWeMq…
- "The company I work for recently put me on a team dedicated to learning about how…" — rdc_mthi5im
- "bypassing ai detection with just prompts doesn’t always work. after using chatgp…" — ytc_Ugx5xbMAa…
Comment
What disgusts me now after doing some research is how many private (possibly "shell") companies AND universities alike are contributing to the development of drones similar to these (Perdix drone, Dynetics X-61, etc.) as well as their intelligence systems. Like they know darn well what the D.O.D and DARPA intend to do with these: nefarious purposes just like the CIA and FBI in the name of "national security" for the last 80 decades. I think I might write an email, place a call, or make a physical visit to the campus of Texas A&M and asking why they are getting involved. Also, you guys should look more into the D.O.D's current loophole in how (semi)autonomous weapon systems must require "human judgement over the use of force." THANKFULLY, in 2018 a congressional research committee wrote a paper exploiting the fact that thus "human judgement" does not require "manual human control" of the weapon systems, but rather only broader human involvement in decisions of where, when, and why the weapon will be deployed. To me, I interpret that as saying it would be acceptable for an AI-system that "operates by human judgements" (and was coded by human judgement) to make these decisions with not a single person directly involved of carrying out the process, just begging one autonomous program to control another. AND WHO DECDICES WHETHER AN AI SYSTEM TRULY RUNS SIMILAR TO HUMAN JUDGEMENT?!
like how is anybody ok with any of this...
Source: youtube · Category: AI Harm Incident · Posted: 2025-09-28T22:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyQFSEL1NwAly6aV794AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxgGW5iNt2VF9esAqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx-p_bQ6lHOeYcrQpR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyxdy3YTxKrKVcFLoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx9-_BX3vf0jCeKJkF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx7jaXoNXp5kc97l0l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyvybHal9W56yW4yvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgywrRsa8cWMcRWlE4l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyJOA4lT931neyRVzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw24sSvLNXQcv7gdZR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
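The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be parsed, validated, and indexed by comment ID for the "look up by ID" view — note the `ALLOWED` value sets below are inferred only from the values visible in this sample, not from an authoritative codebook, and `index_codes` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension — inferred from this sample batch,
# not an official codebook (assumption).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse a batch LLM response and return {comment_id: codes},
    silently skipping rows whose values fall outside ALLOWED."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a single-row batch (hypothetical ID "ytc_x"):
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codes = index_codes(raw)
print(codes["ytc_x"]["policy"])  # → regulate
```

Validating against a closed value set before indexing is what lets malformed or hallucinated codes be flagged instead of silently entering the dataset.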