Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Comment

> The only thing to fear from AI would be the intentions of the people who program it. If something is programmed to manipulate and spread misinformation then we are in trouble. Kinda like Fox news, and Tucker would be out of a job. I am less worried about actually thinking machines( which probably wont happen for a long time), what I really worries me is algorithms being used to convince and manipulate the masses. Don't trust Elon.... he loves taking credit for other people's work.

youtube · AI Governance · 2023-04-20T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgysLscr16tq0tu71Q14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwrlL1eNSre2Pda3754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyPZUNbAdFsVIpEPwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwg2tav6471ERAg7eN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw7uWq6XWzD3Ruv5NV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_Ugypk6WeTtl_k5PMbq54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxVUnZc_HAd9RWgqpl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyO56CidKyonctrUcZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwRo1B2I24pGiljDs14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyBsMMlv_oAY--WBcF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
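The raw response is a JSON array of per-comment codings, each record carrying the four dimensions shown in the table above. A minimal sketch of how such a batch response could be parsed and indexed by comment ID is below; the allowed value sets are inferred from the values visible in this dump, not a confirmed schema, and `parse_codings` is a hypothetical helper.

```python
import json

# Allowed values per coding dimension. NOTE: these enumerations are
# assumptions inferred from the values visible on this page, not a
# documented codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "disapproval", "approval",
                "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Raises ValueError when a record is missing a dimension or uses a
    value outside the allowed set, so malformed model output is caught
    before it reaches the database.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        by_id[rec["id"]] = rec
    return by_id

# Example with one record copied from the raw response above.
raw = ('[{"id":"ytc_UgyPZUNbAdFsVIpEPwp4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"unclear","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_UgyPZUNbAdFsVIpEPwp4AaABAg"]["emotion"])  # fear
```

Validating against an explicit value set, rather than trusting the model output, means a typo or an invented category in the LLM response fails loudly instead of silently polluting the coded dataset.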