Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples

- "Me when I run LLaMa on a private instance with no wireless chips interred: (i us…" (ytc_Ugx21iejh…)
- "and yet ai is still more trustworthy as a news source than DW, imagine that!…" (ytc_Ugz-GDny2…)
- "Book recommendation: \"A Brief Guide of 12 Strategies to Minimize the Adverse Imp…" (ytc_UgwEjdDwG…)
- "Evidently 99% of the people in these comments think we are in the early days of …" (ytc_UgyYEw4MC…)
- "She said what I have been wanting to say. These tech bros and others will say, o…" (ytc_Ugz22ykQ1…)
- "Ty. I was telling WAY too much to chatgpt so that might be the reason.…" (rdc_oa1pp0y)
- "Sigh. SO STUPID. Our regulatory state isn’t even supposed to exist, and these de…" (ytc_UgzSkL_I2…)
- "If you can prove X didnt do enough to stop illegal images of women so called dee…" (ytc_UgwSxuMx1…)
Comment

There are way too many safeguards to go full Skynet so that part of this clip is silly but the use of AI by the military scares the hell out of me. Games like Call of Duty make a war with drones seem survivable. It is not. AI has the potential to be a devastating weapon. We were taught that nuclear war was survivable by hiding under our school desks. Like nuclear weapons, AI needs to be extremely limited in military applications.

Source: youtube · 2018-04-06T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgymU-_jZ6AYNzHLXyl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwoJMQsJK4l7_qTYY14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1opUJZPpTaCzixfV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwddz6KSwtidh3OXJ94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyBgsq51EU3Ab9SDiZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwQKXEkh_sVhciDqrJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7iGQSIGqblk8SSMh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugybg98pKjriV4QjR2h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx5hRf__vHqBl_Hxjt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzuorSzfY1wjp2W3dx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
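The lookup-by-comment-ID view above can be sketched as a small parsing step: the model returns one JSON array per batch, and indexing its rows by `id` gives constant-time lookup of a comment's coded dimensions. This is a minimal sketch assuming only the field names visible in the JSON above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `raw_response` string is abbreviated to two rows for illustration.

```python
import json

# Raw batch response as returned by the model (abbreviated to two rows).
raw_response = '''
[
  {"id": "ytc_UgymU-_jZ6AYNzHLXyl4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzuorSzfY1wjp2W3dx4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
'''

# Index the coded rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment; raises KeyError if uncoded."""
    return codings[comment_id]

coding = lookup("ytc_UgzuorSzfY1wjp2W3dx4AaABAg")
print(coding["responsibility"], coding["emotion"])  # prints: distributed fear
```

In practice one would also want to validate that each dimension's value falls inside the codebook's allowed categories before trusting the model output.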