Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "This will only make is depend on news again. Our personal choice of resources wi…" (ytc_Ugx7iOqi3…)
- "In the video, Sophia is an AI-powered robot designed to interact with humans thr…" (ytr_Ugyzd1Ov9…)
- "Wonder how they are going to position AI to be able to purchase the goods that a…" (ytc_UgzfCLGvb…)
- "This has the potential to be more influential to society than the entire interne…" (ytc_UgzqXwGYO…)
- "The most fundamental flaw in the way we speak about AI, is that we personify a g…" (ytc_UgykAr0gD…)
- "My choice is to utterly avoid using AI as much as possible. I don't want to pla…" (ytc_UgzUJulIf…)
- "What if you make the Ai itself into art? I believe the issue is about care. Wh…" (ytc_UgxluaHiY…)
- "Why don’t we just go back to being racist so we don’t have to fix the AI?…" (ytc_Ugw_ELrpR…)
Comment
The danger of AI isn’t the worst thing it’s man using it for “security and safety” and the wrong hands using it to control. We already have NO PRIVACY. We have been more and more unsafe with time. Phones know where you are at all times. Cameras are everywhere, TVs and cars have trackers. We have never been more at risk of government control.
Then we also have so many ppl who know so much but also NOTHING. Bc we just google and trust and believe everything we see and hear.
Human interaction is less and less and violence seems to be more and more.
More technology isn’t best. AI needs to be destroyed and gone. We need to GO BACK TO BASICS. But ppl are comfortable and THATS GONNA BE OUR DOWNFALL.
Call to make payments. Stop using high tech phones go back to flip, paper applications, factory work and stop using so much machinery where ppl are needed.
But people are lazy.
youtube · AI Governance · 2025-12-04T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyWSOb65xLaLFvSQSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyzNpjhizW_lNZuBMp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwkjc4M1L79sxwQRNd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwt44sB9-fBFi4-kSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzHovDIDzDfQ-zgFp54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyAvYlFpB6OmsRNvhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwucTteIj2AJ0BTPQJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxUGl9uMX0fpc7MU4Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwn-jdIvoLBiautlyV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgweSvlBT2Er0xdFKxh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
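The "look up by comment ID" action above amounts to parsing this JSON array and filtering on the `id` field. A minimal sketch of that lookup in Python, assuming the model output is valid JSON with the field names shown (the function name `lookup_by_comment_id` is illustrative, not part of the tool):

```python
import json

# A fragment of the batch response shown above, as emitted by the model.
raw_response = """
[
  {"id": "ytc_UgyzNpjhizW_lNZuBMp4AaABAg",
   "responsibility": "government", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwt44sB9-fBFi4-kSB4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]
""".strip()

def lookup_by_comment_id(raw: str, comment_id: str):
    """Parse a raw batch response and return the record for one comment ID."""
    records = json.loads(raw)
    # Each record carries the four coded dimensions plus the comment ID.
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_by_comment_id(raw_response, "ytc_UgyzNpjhizW_lNZuBMp4AaABAg")
print(record["policy"])  # -> regulate
```

In practice the parse step would also want to validate that each dimension value falls in the coding scheme's allowed set (e.g. `policy` in `{"regulate", "ban", "none"}`) before accepting a batch.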