Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Using AI to *assist* you in ways like giving you ideas and researching for you i…" (ytc_UgyMfoCdK…)
- "How do you make a longer than 8-minute video? And if there are multiple videos, …" (rdc_mubv42y)
- "You seem to think those running things give a crap what you or anyone else has t…" (ytr_UgwBAqdmg…)
- "Give it some time and the ai will be so powerful to eventually create things wit…" (ytc_UgzRU-DU2…)
- "We appreciate your feedback. If you're interested in exploring more about advanc…" (ytr_Ugx9xTcIg…)
- "Electricians and plumbers can make over $100K and half of them will be retired i…" (rdc_l4rc66n)
- "Too realistic, give it an anime girl s face and it will be cheaper and will sell…" (ytc_UgzAH3Npe…)
- "AI is so tricky, I love all the benefits I'm getting from using it, it has saved…" (ytc_Ugyfj-PwN…)
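The sample IDs above appear to follow a prefix convention: `ytc_` for YouTube comments, `ytr_` for YouTube replies, and `rdc_` for Reddit comments. This mapping is an assumption inferred from the visible IDs, not a documented scheme. A minimal lookup helper under that assumption:

```python
# Assumed prefix convention, inferred from the sample IDs on this page
# (ytc_ = YouTube comment, ytr_ = YouTube reply, rdc_ = Reddit comment).
ID_PREFIXES = {
    "ytc_": ("youtube", "comment"),
    "ytr_": ("youtube", "reply"),
    "rdc_": ("reddit", "comment"),
}

def classify_id(comment_id: str) -> tuple[str, str]:
    """Return (platform, kind) for a coded comment ID, or raise ValueError."""
    for prefix, meta in ID_PREFIXES.items():
        if comment_id.startswith(prefix):
            return meta
    raise ValueError(f"unknown ID prefix: {comment_id!r}")
```

This keeps the platform routing in one table, so adding a new source means adding one dictionary entry.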
Comment
We haven't addressed our own psychological behaviour, AI won't kill us, we will kill ourselves through or with AI. Will individual countries develop AI to attack through the infrastructure of their own military institutions? or will individual countries develop AI for the greater good, and will that greater good be accepted by those in power
My worry is with the person who wants to control or own the world, the god complex is distorting for that individual, and saying we are going to war has and still is a threat to anybody on the receiving end.
Who knows what is right or wrong in this day and age, as we all still support the war machines and the development. AI is developed by DARPA and that is basically to advance the military, along with GPS, RADAR and the INTERNET. So I am worried that this developed AI systems origin is from the US Defence Advanced Research Projects Agency.
Platform: youtube | Topic: AI Governance | Posted: 2025-12-30T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
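A table like the one above can be generated straight from a coded record. The field names and labels below mirror this page; the record shape itself is a hypothetical sketch, not a documented schema.

```python
def render_coding_table(record: dict) -> str:
    """Render one coded record as the markdown table used on this page.
    The (label, key) pairs are assumptions based on the visible fields."""
    labels = [
        ("Responsibility", "responsibility"),
        ("Reasoning", "reasoning"),
        ("Policy", "policy"),
        ("Emotion", "emotion"),
        ("Coded at", "coded_at"),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    for label, key in labels:
        # Fall back to "unclear", matching the label set seen in the raw output.
        lines.append(f"| {label} | {record.get(key, 'unclear')} |")
    return "\n".join(lines)
```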
Raw LLM Response
[
{"id":"ytc_UgxBcZeta45daj3v8S54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjZfkKi8kOttzfp-R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyRL8KGBsFbr9JfkXN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzqEWLQnN0V3y9sszx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugzp6PgNx0-eWzaSUEV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyWKwUQtzNQOsRG0n14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyLk05uupAQ8SPcgwV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgwhU8ABlqq9h1XEW2t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyw2FHfvWEDGcFoFmZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxuTYYc9d_d13ObIlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
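Because the model returns a JSON array of records, a validation pass can catch malformed batches before they enter the dataset. The allowed label sets below are inferred from the values visible in this sample, not an official codebook; treat them as assumptions.

```python
import json

# Label sets inferred from this page's sample output -- an assumption,
# not a published coding scheme.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self",
               "unclear"},
    "emotion": {"outrage", "fear", "resignation", "mixed", "indifference",
                "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse the raw LLM output and reject records carrying unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records
```

Rejecting the whole batch on one bad label is a deliberate choice here: it surfaces prompt drift early instead of silently storing out-of-vocabulary codes.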